AI chatbots are now responsible for an ever-growing share of internet content, including written text. Here’s the problem: Text is one of the simplest things for chatbots to generate and is often harder to identify as fake than AI images or videos (especially if you engineer your prompts just right). Still, it’s not impossible to catch AI-generated text. I spend a lot of time evaluating chatbots, so I’ve compiled seven common giveaways of AI-generated text I’ve encountered.
1. Unnecessary Encouragement or Enthusiasm
AI chatbots tell you what you want to hear. If your prompt implies a certain point of view, the response you get will affirm that belief. This kind of sycophantic encouragement is one of the biggest tells of AI-generated text, especially when the actual context of the text in question doesn’t justify the level of praise or enthusiasm you see.
For example, I recently developed a skincare routine with AI. When I sent a description of my routine to ChatGPT, it told me, “Overall verdict first: Your routine is extremely well thought-out and mostly solid.” (Yes, the bold text is part of the chatbot’s response.) ChatGPT then continued with its first major header, “What You Did Very Well,” which began, “Honestly, you hit the three pillars of skin aging prevention almost perfectly.”
2. Overly Perfect Grammar and Punctuation
If you read a lot, you understand the concept of creative (or artistic) license. This is the idea that writers break grammatical conventions or rules for effect, especially in fiction or poetry. For example, a writer might use a run-on sentence for emphasis. Or a sentence fragment. Furthermore, most people just don’t have an encyclopedic knowledge of grammatical rules, so they will make mistakes. If you look closely, you can find many instances of extra (or missing) commas in everyday writing. AI chatbots, on the other hand, rarely make those types of mistakes, unless you explicitly ask them to.
In testing, I asked ChatGPT for a free verse poem, a form that gives it license to break every grammatical rule under the sun. Nonetheless, much of my poem simply reads like grammatically correct sentences with line breaks. Compare that with a free verse poem by E.E. Cummings, which reads nothing like normal prose. Of course, not every free verse poem is as experimental as Cummings’ poetry, and AI chatbots don’t produce perfect grammar in every single instance. However, if you notice that a long Reddit post doesn’t contain so much as a single grammatical slip, for example, that’s a red flag.
3. Routine Inclusion of Em Dashes
You use an em dash when you want to break up a sentence for emphasis, as opposed to the shorter en dash, which describes a range like 5–10. There’s nothing necessarily grammatically wrong with using em dashes, but chatbots historically incorporate them into their responses far more often than the average person does.
When I asked ChatGPT if I could recycle my spent water filters at one point, I got the following response: “Short answer: No—don’t put ZeroWater filters in your normal recycling bin.” Now, seeing a single em dash doesn’t mean something is AI-generated (I like using em dashes sometimes), but if you see a lot of them, that’s a potential giveaway.
4. Repetitive Phrases or Words
AI-generated text isn’t new, so certain phrases and words are well known as AI tells. You can simply pay attention to your own conversations with chatbots to develop a sense of their favorite sayings, or consult one of the many published lists.
Some examples of repetitive AI words are “harness,” “illuminate,” “pivotal,” “realm,” and “underscore,” while examples of popular phrases are “at its core,” “delve into,” “that being said,” “to put it simply,” and more. Of course, these are pretty normal words and phrases, so seeing one doesn’t mean anything. But if you see many of these across a single article or post, that’s a sign that what you’re reading might be from a bot.
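If you’d rather not eyeball every post yourself, this particular check is easy to automate. Below is a minimal Python sketch that tallies AI-associated words, phrases, and em dashes in a passage. The word and phrase lists are just the examples above, not an exhaustive or authoritative tell list, and a nonzero count is a hint, never proof:

```python
import re

# Words and phrases this article calls out as common AI tells
# (illustrative only, not an exhaustive list).
AI_WORDS = ["harness", "illuminate", "pivotal", "realm", "underscore"]
AI_PHRASES = ["at its core", "delve into", "that being said", "to put it simply"]

def count_ai_tells(text: str) -> dict:
    """Count occurrences of AI-associated words, phrases, and em dashes."""
    lowered = text.lower()
    counts = {}
    for word in AI_WORDS:
        # Whole-word match so "realm" doesn't fire inside "realms" etc.
        counts[word] = len(re.findall(r"\b" + re.escape(word) + r"\b", lowered))
    for phrase in AI_PHRASES:
        counts[phrase] = lowered.count(phrase)
    # Count the em dash character itself (U+2014).
    counts["em dashes"] = text.count("\u2014")
    # Report only the tells that actually appear.
    return {k: v for k, v in counts.items() if v > 0}

sample = (
    "At its core, this review will delve into the pivotal features\u2014"
    "and, that being said, underscore what matters."
)
print(count_ai_tells(sample))
```

Running this on the sample flags “pivotal,” “underscore,” two stock phrases, and an em dash, which is exactly the kind of pileup worth noticing in a longer article.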
5. A Heavy Reliance on Lists, Headings, and Structure
Ask ChatGPT, or any other chatbot, a question, and the response you get almost certainly includes bullet points, different headings, and a summary or conclusion section, alongside other similar formatting elements. That kind of structure makes sense in an article like this one, but it feels a lot more out of place in other contexts.
As an example, if I ask ChatGPT if Pokémon Pokopia is worth buying, it begins with a short answer, then sets up the following headings (each with bullet points within) in order: “What Pokémon Pokopia Actually Is,” “Reception (So Far),” and “When It’s Worth Buying.” It then caps off its response with a brief conclusion section. This is a very different kind of writing than you’d get if you asked a friend or watched a video that covers the same topic.
6. A Lack of Unique Content
AI chatbots run on large language models (LLMs) that are trained on vast amounts of preexisting data. Oftentimes, chatbots also search the internet for up-to-date information to inform their replies. As a result, when an AI tells you something, it’s regurgitating what other people already said, not forming an opinion itself.
If you’re reading a blog post reviewing Pokémon Pokopia, but it covers only the same generic information you get when talking to ChatGPT about the game (which itself compiles from various reviews and previews of the game), that can be a red flag. But if the blog post has personal anecdotes or thoughts on the game, it’s less likely to be AI.
7. AI Detection Tools Flag It
If you don’t want to put the work in yourself, that’s no problem, because a ton of online tools can analyze text and tell you how likely it is to be AI-generated. For example, I submitted the poem I generated with ChatGPT above to Copyleaks, an AI detection tool. It told me there was a 100% chance my poem was AI-generated.
However, keep in mind that these tools aren’t perfect. I ran the same poem through ZeroGPT and QuillBot’s AI detector, and both failed to detect that it was generated by a bot. If you are going to rely on these types of tools, I recommend using a few different ones to cross-check their results. Even then, you can’t fully trust these services.
Look for Patterns, Not Any One Thing
Every single attribute I identified above appears in legitimate writing from real people every day. Some people love grammatical perfection or highly structured writing, after all. However, if a piece of online writing exhibits most or even all of the signs I call out, there’s a strong chance that a human didn’t write at least parts of it.
Don’t try to avoid the above characteristics when you write something, though. Not only is it likely going to sound awkward, but it’s just unnecessary. It’s surprisingly difficult to replicate the feel of AI-generated text, and the telltale signs are changing all the time. When you write, just focus on being the best writer you can possibly be.
As with spotting AI images or videos, spotting AI text is a skill that requires practice. Furthermore, AI models are constantly improving, so clocking AI-generated content will become increasingly difficult. Don’t sweat it if you mistake something fake for something real, as that’s just an inevitability in the AI age. I’ll continue to update this article as new trends emerge, too.
About Our Expert
Ruben Circelli
Writer, Software
Experience
I’ve been writing about consumer technology and video games for over a decade at a variety of publications, including Destructoid, GamesRadar+, Lifewire, PCGamesN, Trusted Reviews, and What Hi-Fi?, among many others. At PCMag, I review AI and productivity software—everything from chatbots to to-do list apps. In my free time, I’m likely cooking something, playing a game, or tinkering with my computer.
