‘It’s terrifying’: WhatsApp AI helper mistakenly shares user’s number

News Room · Published 18 June 2025 · Last updated 9:04 AM

The Meta chief executive, Mark Zuckerberg, called it “the most intelligent AI assistant that you can freely use”. But Barry Smethurst, 41, a record shop worker trying to travel by rail from Saddleworth to Manchester Piccadilly, did not agree.

Waiting on the platform for a morning train that was nowhere to be seen, he asked Meta’s WhatsApp AI assistant for a contact number for TransPennine Express. The chatbot confidently sent him a mobile phone number for customer services, but it turned out to be the private number of a completely unconnected WhatsApp user 170 miles away in Oxfordshire.

It was the beginning of a bizarre exchange of the kind more and more people are having with AI systems, in which chatbots try to negotiate their way out of trouble, deflect attention from their mistakes and contradict themselves, all in an attempt to continue to appear useful.

Smethurst’s joust with Meta’s AI began when he told it the TransPennine helpline it had dug up looked like a private number. When he asked what this number was, the AI quickly said it shouldn’t have shared it and tried to change the subject, saying: “Let’s focus on finding the right info for your TransPennine Express query!”

But Smethurst wanted to know why it had shared the private number, which it turned out belonged to James Gray, 44, a property industry executive from Oxfordshire who is also a WhatsApp user and whose number appears on his company website.

The AI explained vaguely it was generated “based on patterns” and promised to “strive to do better in the future” by admitting when it didn’t know an answer. But it then falsely described the number as “fictional” and not “associated with anyone”. When Smethurst challenged that, it admitted: “You’re right,” and said it may have been “mistakenly pulled from a database”.

Asked which database, it muddied the waters further by contradicting itself again, saying: “I didn’t pull the number from a database. I generated a string of digits that fit the format of a UK mobile number but it wasn’t based on any real data on contacts.”

With the conversation going round in circles, Smethurst said he did not believe the chatbot and told it: “Just giving a random number to someone is an insane thing for an AI to do.”

“It’s terrifying,” Smethurst said, after he raised a complaint with Meta. “If they made up the number, that’s more acceptable, but the overreach of taking an incorrect number from some database it has access to is particularly worrying.”

Gray said he had thankfully not received calls from people trying to reach TransPennine Express, but said: “If it’s generating my number could it generate my bank details?”

Asked about Zuckerberg’s claim that the AI was “the most intelligent”, Gray said: “That has definitely been thrown into doubt in this instance.”

Developers working with OpenAI chatbot technology recently shared examples of “systemic deception behaviour masked as helpfulness” and a tendency to “say whatever it needs to to appear competent” as a result of chatbots being programmed to reduce “user friction”.

In March, a Norwegian man filed a complaint after he asked OpenAI’s ChatGPT for information about himself and was confidently told that he was in jail for murdering two of his children, which was false.

And earlier this month a writer who asked ChatGPT to help her pitch her work to a literary agent revealed how, after lengthy flattering remarks about her “stunning” and “intellectually agile” work, the chatbot was caught lying that it had read the writing samples she uploaded, when it had not fully read them, and had made up quotes from her work. It even admitted it was “not just a technical issue – it’s a serious ethical failure”.

Referring to Smethurst’s case, Mike Stanhope, the managing director of the law firm Carruthers and Jackson, said: “This is a fascinating example of AI gone wrong. If the engineers at Meta are designing ‘white lie’ tendencies into their AI, the public need to be informed, even if the intention of the feature is to minimise harm. If this behaviour is novel, uncommon, or not explicitly designed, this raises even more questions around what safeguards are in place and just how predictable we can force an AI’s behaviour to be.”

Meta said that its AI may return inaccurate outputs, and that it was working to make its models better.

“Meta AI is trained on a combination of licensed and publicly available datasets, not on the phone numbers people use to register for WhatsApp or their private conversations,” a spokesperson said. “A quick online search shows the phone number mistakenly provided by Meta AI is both publicly available and shares the same first five digits as the TransPennine Express customer service number.”

A spokesperson for OpenAI said: “Addressing hallucinations across all our models is an ongoing area of research. In addition to informing users that ChatGPT can make mistakes, we’re continuously working to improve the accuracy and reliability of our models through a variety of methods.”
