Google talked AI for 2 hours. It didn’t mention hallucinations.

By News Room · Published 20 May 2025 · Last updated 8:02 PM

Google I/O 2025 had one focus: artificial intelligence.

We’ve already covered all of the biggest news to come out of the annual developers conference: a new AI video generation tool called Flow. A $250 AI Ultra subscription plan. Tons of new changes to Gemini. A virtual shopping try-on feature. And critically, the launch of the search tool AI Mode to all users in the United States.

Yet over nearly two hours of Google leaders talking about AI, one word we didn’t hear was “hallucination”.

Hallucinations remain one of the most stubborn and concerning problems with AI models. The term refers to the invented facts and inaccuracies that large language models produce in their replies. And according to the big AI brands’ own metrics, hallucinations are getting worse, with some models hallucinating more than 40 percent of the time.

But if you were watching Google I/O 2025, you wouldn’t know this problem existed. You’d think models like Gemini never hallucinate; you would certainly be surprised to see the warning appended to every Google AI Overview. (“AI responses may include mistakes”.)

The closest Google came to acknowledging the hallucination problem came during a segment of the presentation on AI Mode and Gemini’s Deep Search capabilities. The model would check its own work before delivering an answer, we were told — but without more detail on this process, it sounds more like the blind leading the blind than genuine fact-checking.

For AI skeptics, the degree of confidence Silicon Valley has in these tools seems divorced from actual results. Real users notice when AI tools fail at simple tasks like counting, spellchecking, or answering questions like “Will water freeze at 27 degrees Fahrenheit?”

Google was eager to remind viewers that its newest AI model, Gemini 2.5 Pro, sits atop many AI leaderboards. But when it comes to truthfulness and the ability to answer simple questions, AI chatbots are graded on a curve.

Gemini 2.5 Pro is Google’s most intelligent AI model (according to Google), yet it scores just 52.9 percent on the SimpleQA benchmark. According to an OpenAI research paper, the SimpleQA test is “a benchmark that evaluates the ability of language models to answer short, fact-seeking questions.” (Emphasis ours.)
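
A score like that comes from grading short model answers against reference answers; OpenAI’s paper uses an LLM-based judge to do the grading. Purely as an illustration of what such a score measures, here is a stripped-down sketch that swaps the judge for naive string matching. The question-answer pairs and helper names are assumptions for the example, not the benchmark’s actual data or code.

```python
# Toy sketch of SimpleQA-style accuracy: each item is a short, fact-seeking
# question with a reference answer, and the score is the percentage answered
# correctly. Illustrative only; the real benchmark uses an LLM-based grader
# rather than substring matching.

def normalize(text: str) -> str:
    """Crude normalization: lowercase and keep only letters, digits, spaces."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ").strip()

def is_correct(prediction: str, reference: str) -> bool:
    """Count a prediction correct if the reference answer appears in it."""
    return normalize(reference) in normalize(prediction)

def simpleqa_style_accuracy(answer_fn, dataset) -> float:
    """dataset: list of (question, reference_answer) pairs (hypothetical)."""
    correct = sum(is_correct(answer_fn(q), ref) for q, ref in dataset)
    return 100.0 * correct / len(dataset)

if __name__ == "__main__":
    sample = [
        ("What year did the first iPhone ship?", "2007"),
        ("Which element has atomic number 6?", "carbon"),
    ]
    # A model that gets one of two right scores 50.0; Gemini 2.5 Pro's
    # reported 52.9 means roughly 53 correct answers per 100 questions.
    fake_model = lambda q: "2007" if "iPhone" in q else "I believe it is oxygen"
    print(f"{simpleqa_style_accuracy(fake_model, sample):.1f}%")
```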

A Google representative declined to discuss the SimpleQA benchmark, or hallucinations in general, but did point us to Google’s official explainer on AI Mode and AI Overviews. Here’s what it has to say:

[AI Mode] uses a large language model to help answer queries and it is possible that, in rare cases, it may sometimes confidently present information that is inaccurate, which is commonly known as ‘hallucination.’ As with AI Overviews, in some cases this experiment may misinterpret web content or miss context, as can happen with any automated system in Search…

We’re also using novel approaches with the model’s reasoning capabilities to improve factuality. For example, in collaboration with Google DeepMind research teams, we use agentic reinforcement learning (RL) in our custom training to reward the model to generate statements it knows are more likely to be accurate (not hallucinated) and also backed up by inputs.
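
Google doesn’t spell out how that reward is computed. Purely as an illustration of the idea of rewarding statements that are “backed up by inputs,” a toy reward term for RL fine-tuning might be shaped like the sketch below; the supports() checker and the weights are placeholders standing in for whatever factuality signal the actual training uses, not Google’s method.

```python
# Illustrative sketch of a grounding-based factuality reward for RL
# fine-tuning: supported statements earn a positive reward, unsupported
# ones a larger penalty. NOT Google's training setup; supports() and the
# weights are placeholders.

from typing import Callable, List


def factuality_reward(
    statements: List[str],
    sources: List[str],
    supports: Callable[[str, List[str]], bool],
    reward_supported: float = 1.0,
    penalty_unsupported: float = -2.0,
) -> float:
    """Average per-statement reward: positive if a statement is backed by
    the sources (per the supplied supports() checker), negative otherwise."""
    if not statements:
        return 0.0
    total = sum(
        reward_supported if supports(s, sources) else penalty_unsupported
        for s in statements
    )
    return total / len(statements)


# Hypothetical usage: supports() would normally be an entailment or
# attribution model; a keyword check keeps the sketch self-contained.
naive_supports = lambda stmt, srcs: any(stmt.lower() in src.lower() for src in srcs)
print(factuality_reward(
    ["water freezes at 32 degrees fahrenheit"],
    ["Water freezes at 32 degrees Fahrenheit at standard pressure."],
    naive_supports,
))  # -> 1.0
```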

Is Google wrong to be optimistic? Hallucinations may yet prove to be a solvable problem, after all. But it seems increasingly clear from the research that hallucinations from LLMs are not a solvable problem right now.

That hasn’t stopped companies like Google and OpenAI from sprinting ahead into the era of AI Search — and that’s likely to be an error-filled era, unless we’re the ones hallucinating.

Topics: Artificial Intelligence, Google Gemini
