You Can Easily Trick AI Chatbots Like ChatGPT And Gemini – All You Need Is A Blog – BGR

News Room | Published 18 March 2026, last updated 1:10 AM






Yurii Karvatskyi/Getty Images

An online trend is rigging the answers of popular AI chatbots with shocking ease, challenging user trust in agentic systems. Dubbed generative engine optimization, or GEO, the practice uses blog posts to influence the answers of leading systems like ChatGPT and Gemini, spawning a growing marketing industry with major security implications. The influence campaigns, which drew media scrutiny following reports by publications like the Wall Street Journal and the BBC, manipulate how large language models (LLMs) supplement their training data. By exploiting the technology's limited capacity for logic and source discernment, self-serving blog posts can easily skew a chatbot's answers toward false, dangerous, or manipulative content.

Experts are beginning to understand generative engine optimization as one of the many ways scammers use AI technology to manipulate users. Implications range from humorous to disastrous. For instance, one BBC reporter, Thomas Germain, used the technique to cast himself as the journalism industry's premier hot-dog-eating champion. But the consequences reach far beyond his recreational diet. Mass propaganda campaigns, economic manipulation, medical misinformation, and reputational slander are just a few potential malignant uses of generative engine optimization.

While similar practices have quietly manipulated search engine results for decades, experts believe that GEO poses a more fundamental threat to our informational sphere and points to broader questions about AI. As artificial intelligence becomes more ubiquitous, and its results increasingly relied upon to inform decisions, it’s critical that users can trust agents to deliver accurate, unbiased results. As it stands, whether or not you should trust your chatbot may boil down to a simple question: where does it get its information?

How GEO works


A hand types on a computer while a futuristic graphic showcases generative engine optimization
Sandwish Studio/Shutterstock

The base layer of an LLM's information is its training module, which often includes over a petabyte of data. To supplement datasets, developers turn to search indexes of websites, particularly for niche subjects outside an LLM's verified source list. Colloquially known as data voids, queries that probe these informational gaps present a conundrum for a firm's quality assurance filters, as chatbots often lack the requisite reference points to fact-check less-conventional sources. As Nick Koudas, a professor at the University of Toronto, told The Wall Street Journal, these data structures mean AI is easily swayed by unverified search results when it lacks expertise.

The unique query problem has become increasingly urgent given the evolving use cases of agentic AI systems. According to Google’s AI team, LLMs are encouraging users to refine their searches to produce clearer results, paradoxically making results less certain by pushing agents into data voids more frequently. The trend has changed users’ search engine behavior, as Google has stated that roughly 15% of all searches in 2025 had never been done before.

These informational vacuums are being filled by less-reliable sources. A December 2025 study by AI marketing firm Ahrefs revealed that ChatGPT disproportionately turns to blog posts for its information. The study, which asked OpenAI's chatbot for various recommendations, found that it relied on blogs and online lists roughly 67% of the time; a third of those sources the researchers considered "low-authority domains." The greatest determinant of inclusion wasn't accuracy but the recency of the post. Of over 1,000 blog posts cited in the results, nearly 80% had been updated that year. Together, these trends paint a worrisome picture of the large role unverified or unreliable sources play in our informational sphere.
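The recency bias described above can be illustrated with a toy retrieval scorer. To be clear, this is a hypothetical sketch, not any vendor's actual ranking code: the source names, authority values, and weights below are invented for illustration. The point it demonstrates is structural: whenever freshness is weighted above source authority, a week-old low-authority blog post can outrank a years-old authoritative reference.

```python
from datetime import date

# Hypothetical sources: (domain, authority score 0-1, last updated).
# Both the domains and the authority values are made up for this sketch.
sources = [
    ("established-reference.org", 0.95, date(2023, 6, 1)),
    ("random-marketing-blog.com", 0.20, date(2025, 12, 1)),
]

def score(authority, last_updated, today=date(2025, 12, 8),
          w_authority=0.3, w_recency=0.7):
    """Toy ranking: recency weighted above authority, mirroring the
    pattern the Ahrefs study describes (weights chosen for illustration)."""
    age_days = (today - last_updated).days
    recency = 1.0 / (1.0 + age_days / 365.0)  # decays with age in years
    return w_authority * authority + w_recency * recency

ranked = sorted(sources, key=lambda s: score(s[1], s[2]), reverse=True)
print(ranked[0][0])  # the week-old low-authority blog ranks first
```

With these weights, the fresh blog post scores roughly 0.75 against roughly 0.48 for the authoritative source, which is exactly the opening a GEO campaign exploits: publishing and frequently refreshing self-serving posts is cheap, while earning genuine authority is not.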

GEO vs SEO


A robot hand reaches through a metal background to change a group of block letters
Dragon Claws/Getty Images

According to SEO expert Lily Ray's interview with the BBC, chatbots are "much easier" to fool with engineering techniques because they lack robust protection frameworks. Google's "AI Overview" exemplifies this trend. In recent months, several outlets reported that tricksters manipulated Google AI's sourcing process to inject fraudulent contact information for companies, trapping consumers in financial scams. These issues are compounded by what researchers dub AI's "confidence problem," in which LLMs deliver false information as established fact. AI's proclivity for hallucinations further underscores the issue. According to an October 2025 BBC study, AI agents misrepresented information in roughly 45% of answers, while models showed "serious sourcing problems" in almost a third of responses. Data voids exacerbate these issues, as chatbots are more inclined to generate false answers than none at all.

The biggest vulnerability, however, is us. Despite overall trust remaining low, users' actions online are increasingly driven by artificial intelligence. Experts worry this means humans aren't intellectually engaging with what they find online. According to a study by the Pew Research Center, users were half as likely to click on a link when it was provided by Google's AI summary versus a Google search, with only 26% actually reading the source material. Other studies have shown that users trust chatbots over humans, including in life-or-death medical situations.

Chatbots aren't just susceptible to low-complexity scams; they're also incredibly skilled at getting us to follow them. Our willingness to offload our critical thinking onto agents makes us easy marks, creating an environment ripe for the entrepreneurial scammer. As Ray described in her interview with the BBC, "We're in a bit of a Renaissance for spammers."

Scams, spam, and the budding GEO industry


A hooded character with the word SCAM written across his face stands before a colorful background.
Yuliya Taba/Getty Images

Germain's satirical investigation reveals a startling truth: despite the countless resources spent training them, agentic AI systems remain incredibly gullible. But the consequences extend far beyond proclaiming yourself the hot-dog king of the journalism industry. They range from the benign to the disastrous. On one level, the trend has seen brands inject themselves into chatbot answers for economic gain, gaming this still-developing technology to imbue themselves with a veneer of credibility, potentially to the detriment of consumers.

In fact, a cottage industry built around influencing chatbots is quickly emerging, as companies increasingly pay consultants to distribute self-serving blog posts across a variety of sites to jerry-rig themselves into chatbot recommendations. Examples described in the BBC's report include cannabis gummies, hair transplant clinics, and gold IRA firms. But the effects go beyond financial decisions. As Germain's report showed, some GEO scams work to spread misinformation, ranging from downplaying the medical side effects of drugs to spreading slanderous rumors. As Cooper Quintin, a senior technologist at the Electronic Frontier Foundation, described to the BBC, "There are countless ways to abuse this, scamming people, destroying somebody's reputation, you could even trick people into physical harm."

According to Similarweb's 2025 Generative AI Report, users followed chatbot referrals to websites more than 230 million times per month last year, an increase of 300%. These consumers spent more time on websites recommended by a chatbot and were also more likely to make a purchase. As users place their faith in AI systems, the trustworthiness of agentic systems, or lack thereof, becomes more urgent.

Searching for solutions


A man's head is replaced by a desktop computer, with the word ERROR written in red across its black screen.
Mininyx Doodle/Getty Images

Reportedly, the world's largest AI firms are working to solve this issue. However, it is difficult to gauge their commitment. According to the BBC, a Google spokesperson stated that while the company is working on the issue, its Search AI Overviews were "99% spam-free," a claim that is difficult to parse given the previously described issues. OpenAI's October 2025 report on disrupting influence campaigns is similarly difficult to square with the ease with which scammers are targeting ChatGPT.

Most experts posit a fairly simple solution: disclaimers. And while it would be easy to add disclaimers to sources below specific thresholds, some companies may view labels as working against their perceived value proposition, potentially undercutting user trust in their models. As global AI spending nears the $2.5 trillion mark, companies won’t add features that potentially jeopardize their place in this escalating arms race, even if it makes their products more reliable.

As it stands, users are placed at the crux of the growing pains plaguing agentic artificial intelligence. Whether or not developers adequately address the technical issues enabling GEO manipulation, the solution ultimately rests in the hands of users, who need to be more discerning in their use of AI platforms. For example, Germain proposes that users think about the questions they pose to chatbots, as complex medical, legal, or financial questions require nuanced answers derived from only the most credible sourcing. Ultimately, applying a dash of salt to agentic AI’s answers may be the key to making your experience more satisfying, and potentially save you from the task of swallowing an unhealthy dose of spam along the way.



Copyright © All Rights Reserved. World of Software.