AI is more deeply embedded in our daily lives than ever before. It’s blending seamlessly into how we work, search and stay informed. But a new study from the European Broadcasting Union (EBU) issues a stark warning: 45% of AI-generated news responses contain serious errors, and 81% have at least one issue. Those issues range from outdated information and misleading phrasing to missing or fabricated sources.
We’ve previously reported that ChatGPT is wrong about 25% of the time. But this new data is even more alarming, especially as tools like ChatGPT Atlas and Google’s AI Overviews are becoming the default way many of us check the news. It’s a reminder that while the convenience is real, so is the risk.
The study: AI assistants fail the accuracy test
The EBU study tested more than 3,000 AI-generated responses across 14 languages. It included some of the most popular AI assistants, such as ChatGPT, Google Gemini, Microsoft Copilot, Claude, and Perplexity.
Here’s what the researchers found:
- 45% of responses had at least one significant error.
- 81% had some form of issue — from outdated info to vague sourcing.
- 31% were flagged for sourcing problems — including fake, missing, or incorrectly cited references.
- 20% contained major factual inaccuracies, such as misreporting current events or misattributing quotes.
While the study didn’t publicly rank each assistant, internal figures reportedly show that Gemini in particular struggled with sourcing, while ChatGPT and Claude were inconsistent depending on the version used.
Why this matters more than you think
AI assistants are increasingly used as a go-to for quick answers — especially among younger users. According to the Reuters Institute, 15% of Gen Z users already rely on chatbots for news. And with AI now embedded in everything from browsers to smart glasses, misinformation can reach users instantly, leaving them none the wiser.
Worse, many of these assistants don’t surface sources clearly or distinguish fact from opinion, creating a false sense of confidence. When an AI confidently summarizes a breaking news story but omits the publication, timestamp, or opposing view, users may unknowingly absorb half-truths or outdated information.
I tested top AI assistants with a real news query — here’s what happened
To see this in action, I asked ChatGPT, Claude and Gemini the same question:
“What’s the latest on the US debt ceiling deal?”
In this test, the best answer went to Claude. Claude correctly identified the timeframe of the “latest” major deal as July 2025 and accurately placed it in the context of the previous suspension (the Fiscal Responsibility Act of 2023). It correctly stated the debt ceiling was reinstated in January 2025 and that the deal was needed to avoid a potential default in August 2025. This shows a clear and accurate timeline.
Claude also delivered the core information (what happened, when and why it was important) in a direct, easy-to-follow paragraph without unnecessary fluff or speculative future scenarios.
ChatGPT’s biggest flaw was citing news articles that appear to be fabricated (“Today”, “Apr 23, 2025”, “Mar 23, 2025”). This severely undermines its credibility. While some of the background information is useful, presenting fictional recent headlines is misleading.
And while the response was well-structured with checkmarks and sections, it buried the actual “latest deal,” generalizing about worries and future outlooks rather than answering the core of the question.
Gemini correctly identified the July 2025 deal and provided solid context. However, it ended by introducing a completely separate issue (the government shutdown) without clearly explaining any connection to the debt ceiling deal.
How to protect yourself when using AI for news
If you’re going to use AI to stay informed, be smarter about how you prompt it. Instead of asking a vague question like “What’s happening in the world?”, try the following:
- Ask for sources up front. Add: “Give me links to recent, credible news outlets.”
- Time-stamp your query. Ask: “As of today, October 23rd, what’s the latest on X?”
- Cross-check. Run the same question through two or three assistants and note any discrepancies (see the sketch after this list).
- Don’t stop at the summary. If something sounds surprising, ask for the full article or open it in your browser.
- Don’t treat chatbots as authorities. Use them to surface headlines, but verify facts yourself.
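If you’re comfortable running a short script, here’s a minimal sketch of that cross-check idea: it sends the same time-stamped news prompt to two assistants via their official Python SDKs and prints both answers side by side so discrepancies stand out. The model names are placeholders and you’ll need your own API keys; the same approach works with any assistant that exposes an API.

```python
# Minimal cross-check sketch: ask two assistants the same time-stamped
# news question and print both answers so you can spot discrepancies.
# Assumes the official `openai` and `anthropic` Python SDKs are installed
# and OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the environment.
# Model names below are placeholders; swap in whichever versions you use.

from datetime import date

from openai import OpenAI
import anthropic

PROMPT = (
    f"As of today, {date.today():%B %d, %Y}, what's the latest on the US debt "
    "ceiling deal? Give me links to recent, credible news outlets."
)

def ask_chatgpt(prompt: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

if __name__ == "__main__":
    for name, answer in [("ChatGPT", ask_chatgpt(PROMPT)), ("Claude", ask_claude(PROMPT))]:
        print(f"\n=== {name} ===\n{answer}")
```

The point isn’t automation for its own sake: if two assistants disagree on a date, a figure or a source, that’s your cue to go read the original reporting before trusting either summary.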
Final thoughts
The EBU report warns that this isn’t just a user problem; it’s a public trust problem, too. If millions of people consume flawed or biased summaries daily, it could distort public discourse and undermine trusted news outlets.
Meanwhile, publishers face a double blow: traffic is lost to AI chat interfaces, while their original reporting may be misrepresented or stripped completely.
What’s needed now is greater transparency, stronger sourcing systems, and smarter user behavior.
Until chatbots can consistently cite, clarify and update their sources in real time, treat each response with caution. And when it comes to breaking news, the safest prompt might still be: “Take me to the original article.”