Here’s what’s going on with Google’s funny explanations of made-up expressions

By News Room | Published 27 April 2025, last updated 9:29 PM

[Header image credit: C. Scott Brown / Android Authority]

TL;DR

  • Google AI Overviews are confidently trying to explain nonsense phrases, to great hilarity.
  • Ideally this wouldn’t happen, because AI Overviews are only supposed to appear when Google’s confident in the quality of its output.
  • The line between new phrases and nonsense phrases is a fine one, though, and it’s easy to see the logic Google tries to use to divine meaning.

If you haven’t heard about this phenomenon yet, people are asking Google Search to find the meaning behind various phrases. For actual idioms, this can be really useful, but the problem is that Google’s also pretty willing to dream up its own explanations for expressions that aren’t true idioms at all, just meaningless gibberish. Ask Google what “an empty cat is worth a dog’s ransom” means, and it will do its darndest to extract some semblance of meaning there, even if that means squeezing blood from a stone.

We got in touch with Google to see what was going on here, and the company laid out its case in this official statement:

When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available. This is true of Search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context. AI Overviews are designed to show information backed up by top web results, and their high accuracy rate is on par with other Search features like Featured Snippets.

It seems like the big problem is that it’s not always obvious what these “false premise” searches are in the first place. Language is an evolving thing, and new expressions come into being all the time. People are also prone to mishearing or misremembering things, and may not search for a phrase exactly as it’s intended to be used.

What seems clear from the explanation Google provides alongside these nonsense queries is that it’s still approaching them logically, trying to break down each part and figure out what the speaker could have possibly meant:

[Screenshot: Google Search AI Overview explaining a made-up idiom]

And honestly, it doesn’t do a half bad job. For novel expressions, AI Overviews have resources to draw on that give them at least a fighting chance of figuring out the intended meaning. So how do you tell the difference between a genuine novel expression and a nonsense one, a situation Google refers to as a “data void”?

That’s tricky, and Google tells us that it tries to only surface an AI Overview like this when Search has a certain degree of confidence that a summary would be both helpful and of high quality. It’s also constantly refining that cutoff, and while these public fails may just seem silly and entertaining to us, they give Google useful information about the edge cases where AI Overviews struggle to perform as desired.
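Google hasn’t published how that confidence check actually works, but the basic gating idea is easy to picture. Below is a minimal, purely illustrative Python sketch, with a hypothetical should_show_overview function and made-up relevance scores, of how a generated summary could be suppressed when too little supporting web content exists for a phrase, the “data void” case described above:

# Purely illustrative sketch, not Google's actual system: gate an AI summary
# behind a simple confidence check so that "data void" queries fall back to
# ordinary search results instead of an invented explanation.

def should_show_overview(source_scores, threshold=0.7):
    """Return True only if enough high-quality sources back the summary.

    source_scores holds hypothetical relevance scores (0.0 to 1.0) for the
    top web results the summary would cite.
    """
    if not source_scores:  # nothing on the web backs this phrase: a data void
        return False
    confidence = sum(source_scores) / len(source_scores)
    return confidence >= threshold

# A real idiom with solid sources clears the bar...
print(should_show_overview([0.9, 0.85, 0.8]))  # True
# ...while a nonsense phrase with thin, low-relevance matches does not.
print(should_show_overview([0.3, 0.2]))        # False

The hard part, of course, is the scoring itself: a genuinely new expression and pure gibberish can both look like thin data to a gate like this, which is exactly the edge case Google says it is still refining.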

One of these “hallucinations” on its own is funny, sure, but in the larger context of Google’s AI efforts, we can absolutely appreciate the genuine attempts the company’s systems are making to successfully communicate with users. When we intentionally try to trip it up, should we be surprised that it stumbles?

Maybe the most frustrating part right now is that it’s not always obvious just how confident Google is in any of the AI Overview results it presents, and a user may not immediately understand whether Google is actually citing someone else’s answer or just making a best guess. The more clearly it can communicate that to users, the less of a problem this sort of situation should prove to be.
