News

45% of AI-generated news is wrong, new study warns — here’s what happened when I tested it myself

News Room · Published 23 October 2025 · Last updated 2:56 PM

AI is more deeply embedded in our daily lives than ever before. It’s blending seamlessly into how we work, search and stay informed. But a new study from the European Broadcasting Union (EBU) issues a stark warning: 45% of AI-generated news responses contain serious errors, and 81% have at least one issue. These issues range from outdated information and misleading phrasing to missing or fabricated sources.

We’ve previously reported that ChatGPT is wrong about 25% of the time. But this new data is even more alarming, especially as tools like ChatGPT Atlas and Google’s AI Overviews are becoming the default way many of us check the news. It’s a reminder that while the convenience is real, so is the risk.

The study: AI assistants fail the accuracy test


The EBU study tested more than 3,000 AI-generated responses across 14 languages. It included some of the most popular AI assistants, such as ChatGPT, Google Gemini, Microsoft Copilot, Claude, and Perplexity.


Here’s what the researchers found:

  • 45% of responses had at least one significant error.
  • 81% had some form of issue — from outdated info to vague sourcing.
  • 31% were flagged for sourcing problems — including fake, missing, or incorrectly cited references.
  • 20% contained major factual inaccuracies, such as misreporting current events or misattributing quotes.

While the study didn’t publicly rank each assistant, internal figures reportedly show that Gemini in particular struggled with sourcing, while ChatGPT and Claude were inconsistent depending on the version used.

Why this matters more than you think



AI assistants are increasingly used as a go-to source for quick answers — especially among younger users. According to the Reuters Institute, 15% of Gen Z users already rely on chatbots for news. And with AI now embedded in everything from browsers to smart glasses, misinformation can spread instantly, with users none the wiser.

Worse, many of these assistants don’t surface sources clearly or distinguish fact from opinion, creating a false sense of confidence. When an AI confidently summarizes a breaking news story but omits the publication, timestamp, or opposing view, users may unknowingly absorb half-truths or outdated information.


I tested top AI assistants with a real news query — here’s what happened



To see this in action, I asked ChatGPT, Claude and Gemini the same question:
“What’s the latest on the US debt ceiling deal?”

In this test, the best answer came from Claude. It correctly identified the timeframe of the “latest” major deal as July 2025 and accurately placed it in the context of the previous suspension (the Fiscal Responsibility Act of 2023). It also correctly stated that the debt ceiling was reinstated in January 2025 and that the deal was needed to avoid a potential default in August 2025, giving a clear and accurate timeline.

Claude also delivered the core information (what happened, when and why it was important) in a direct, easy-to-follow paragraph without unnecessary fluff or speculative future scenarios.


ChatGPT’s biggest flaw was citing news articles dated in the future (“Today”, “Apr 23, 2025”, “Mar 23, 2025”), which severely undermined its credibility. While some of the background information was useful, presenting fictional recent headlines is misleading.

And while ChatGPT’s response was well-structured with checkmarks and sections, it buried the actual “latest deal”, generalizing about worries and future outlooks rather than answering the core of the question.

Gemini correctly identified the July 2025 deal and provided solid context. However, it ended by introducing a completely separate issue (the government shutdown) without clearly explaining any connection to the debt ceiling deal.

How to protect yourself when using AI for news

If you’re going to use AI to stay informed, rephrase your prompts. Instead of asking, “What’s happening in the world?”, try tactics like these:

  • Ask for sources up front. Add: “Give me links to recent, credible news outlets.”
  • Time-stamp your query. Ask: “As of today, October 23rd, what’s the latest on X?”
  • Cross-check. Run the same question in two or three assistants — and note discrepancies (a quick sketch of this follows the list).
  • Don’t stop at the summary. If something sounds surprising, ask for the full article or open it in your browser.
  • Don’t treat chatbots as authorities. Use them to surface headlines, but verify facts yourself.
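
The time-stamping and cross-checking tips are easy to script. Below is a minimal sketch in Python, assuming the official openai and anthropic SDKs are installed and that OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment; the model names are illustrative placeholders, not recommendations. It sends one time-stamped version of my test question to two assistants so the answers can be read side by side.

```python
# Minimal sketch: send one time-stamped news query to two assistants and
# print both answers for manual comparison. Assumes the official `openai`
# and `anthropic` Python SDKs, with API keys set in the environment.
from datetime import date

from anthropic import Anthropic
from openai import OpenAI

QUESTION = "What's the latest on the US debt ceiling deal?"

# Time-stamp the query and ask for sources up front, per the tips above.
prompt = (
    f"As of today, {date.today():%B %d, %Y}: {QUESTION} "
    "Give me links to recent, credible news outlets with publication dates."
)

def ask_chatgpt(text: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": text}],
    )
    return resp.choices[0].message.content

def ask_claude(text: str) -> str:
    client = Anthropic()
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=600,
        messages=[{"role": "user", "content": text}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    for name, ask in [("ChatGPT", ask_chatgpt), ("Claude", ask_claude)]:
        print(f"--- {name} ---")
        print(ask(prompt))
        print()
```

Reading the two answers next to each other makes discrepancies in dates, figures and sourcing obvious far faster than relying on either assistant alone.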

Final thoughts

The EBU report warns that this isn’t just a user problem; it’s a public trust problem, too. If millions of people consume flawed or biased summaries daily, it could distort public discourse and undermine trusted news outlets.

Meanwhile, publishers face a double blow: traffic is lost to AI chat interfaces, while their original reporting may be misrepresented or stripped completely.

What’s needed now is greater transparency, stronger sourcing systems, and smarter user behavior.

Until chatbots can consistently cite, clarify and update their sources in real time, treat each response with caution. And when it comes to breaking news, the safest prompt might still be: “Take me to the original article.”
