Since the dawn of SEO, Google has told us that if we build high-quality content, clicks will come. They’ve developed frameworks like E-E-A-T and rolled out countless core updates, many with the goal of rooting out low-quality content and rewarding sites that “do it right.”
So why, in the year of our lord two-thousand and twenty-six, do we still see so many out-of-date, copycat, and generally low-quality pages get high-profile positions in AI Overviews and organic rankings?
Let’s zoom in a bit here to compare Google’s recent advice on getting visibility in search and the realities of what we’re seeing in the wild.
What Google says about search visibility in 2026
Google’s business model relies on publishers creating new, high-quality content. That’s even more true with AI answers in the mix, since those responses are generated from what the LLM learns from publishers’ websites. It’s how Google delivers a good product to its users.
Because of that, Google has consistently beaten the drum that to get found on its SERP, you just need to focus on creating high-quality content designed for humans, not AI or algorithms.
Here’s Google’s Danny Sullivan in a recent talk on the topic:
“And when it comes to all of our ranking systems, it’s about how are we trying to reward content that we think is great for people, that it was written for human beings in mind, not written for search algorithms, not written for LLMs, not written for LMNO, PEO, whatever you want to call it.”
What it means to create content that’s “great for people” is a huge can of worms. But some of the terms we see mentioned by Google employees and in Google documentation are:
- “Unique and valuable” content
- “Clear sourcing”
- “Evidence of the expertise involved”
Google sings the same refrain about getting cited in its AI Overviews and AI Mode. In a podcast last year, Nick Fox, SVP of Knowledge and Information at Google, was asked to give guidance to publishers who wanted to be noticed by AI:
“The short answer is what you would have built and the way to optimize to do well in Google’s AI experiences is very similar, I would say the same, as how to perform well in traditional search. And it really does come down to build a great site, build great content. The way we put it is: build for users. Build what you would want to read, what you would want to access.”
It sounds simple. Serve your audience first and get found on Google. But as much as Google needs publishers to keep pushing out actually good posts and guides, it’s not always rewarding the ones that do.
⏸️ A quick pause… If you’d like to see how to rank on search in 2026, check out How to DO SEO Right—Right Now!
Google still rewards low-quality sites and old content
Let’s be honest, it stings that AIOs are soaking up so many of the clicks that used to go to content publishers. It’s salt in the wound when the value that’s left over is given to a site that’s either actively or passively not meeting the quality standards Google swears it adheres to (especially when you do).
AIOs are the wild wild west
To be fair, the incident that kicked off this review felt a little personal. I searched “What’s the average cost of Google Ads?” I expected to see our Search Ads Benchmarks report. Our team, especially our Senior Content Specialist Susie Marino, works really hard on that report, and it:
- Is based on unique data you can’t get anywhere else
- Is logically organized with all the proper subheadings, etc.
- Includes cited quotes and insights from actual PPC experts
Basically, it has all the ingredients Google says to add for visibility.
Our benchmark reports are based on new and historic data you can’t get anywhere else.
The AI answer for that term was filled with stats that looked like they were scraped directly from our report. What it didn’t include was a citation to our website. Instead, it linked to a blog post that used our data secondhand.
Why did this work? Most likely, it’s because the cited post played the GEO game by using long, natural-language questions in the H2s and very direct answers immediately after.
It’s a clunky reading experience for humans. But it’s catnip for a system searching for the easiest way to predict a string of words to answer a question.
To be clear, I’m not hating the player here (although a link or mention of where their data came from would be nice). It’s the game that’s frustrating. If Google’s AI is choosing content that was “unique and valuable,” had “clear sourcing,” and highlighted “evidence of the expertise involved,” how could it position these posts over the reports with the original data?
I’m not the only one noticing this trend. SEO expert Lily Ray recently noted that “pay-to-play” content is highly successful in AI Overviews, even as Google has worked to demote this kind of content elsewhere.
Source
Here’s another example that’s less about content quality and more about recency. Recency in content is relative. If you’re searching for studies that document human psychology, an eight-year-old paper might be just fine since those projects can cover a decade of research (and the human brain hasn’t changed a lot in the last 30 years).
But when you’re working in a dynamic field like online sales, where every year is a new epoch, stale data is mostly worthless.
Have a look at the SERP for the query “What’s a good conversion rate on Etsy.”
The first and second AIO citations are from social media posts. That’s not surprising, since real-life experience provides an authoritative answer to this question.
What stands out is the third citation, which is the first blog post on the list. No shade to the content itself—the author gathered external data and shared her own experience of running multiple Etsy shops.
The problem is that it was published in 2021. Etsy is a very popular place to sell, and the results you get there will change dramatically as shopper behavior, the economy, and online platforms change. It’s hard to imagine there isn’t more recent data than this.
This post is also among the top results in the organic links, showing the issue isn’t just about AI. Which brings us to the next point.
💡 How are other businesses dealing with all these changes? Find out what 300 businesses have to say in The Big Small Business Website Trends Report: SEO, GEO, & the Future of Traffic
Organic search results still have problems
In all fairness to Google, they’ve had a lot less time to sort out their AI content recommendation systems. But by that logic, their organic ranking algorithms, which have had decades’ worth of updates, should be pretty stellar by now.
Yet, take a look at some of the blog content on the website that won the citation in the Google Ads cost AI Overview.
You’ll see pages filled with AI-generated posts that simply rank the business as the best option in many locations and for many industries. This can’t be what Google considers helpful, human-centered content.
What’s really surprising is just how quickly this website is gaining traffic and organic keyword rankings, especially as highly trusted sites with years of authoritative content suffer amid post-core-update volatility.
Note that this website has a domain rating of 30, and it’s beating dozens of well-established publishers for high-intent keywords.
Here’s another example that’s near and dear to my heart. This post showed up in the results for the term “Content marketing trends 2026.”
The entire post was around 400 words. There were no expert quotes, no data to support the claims, and not even a screenshot as an example. Yet it made it to the front page for a while. Luckily, it’s gone now (crossing fingers our content trends guide keeps climbing), but it should never have landed there in the first place if content with a lack of authority and repetitive insights is really being filtered out.
HyHoang… I hear you.
Source
What publishers should do
The hope is that Google will adapt its AI Overviews to cite original, helpful content. But it still hasn’t completely figured out its organic ranking algorithm. Case in point: the latest core update has deeply penalized some of the most authoritative news sites on the planet.
That may lead many marketers and content creators to lean into hacky tactics like the ones in our examples. But that’s a short-term strategy that should stop working once Google catches on.
I believe the best strategy is to keep digging up new data, mining internal and external subject matter experts for fresh wisdom, and organizing it in a way that makes the most sense for the people who’ll read it. After all, putting a bad experience in front of more people is a very efficient way to ruin your brand. Let’s just hope Google rewards those of us who do it right.
