AI-generated images aren’t going away anytime soon. In fact, they look increasingly realistic thanks to advanced models like Gemini’s Nano Banana Pro. You might not be able to tell immediately what’s fake in every instance, but it’s still worth checking for a few telltale signs. Below are the seven easiest ways to spot AI images.
1. Is There a Watermark?
Yes, this is obvious, but it’s not quite as obvious as you might think. At a glance, it’s easy to simply miss a watermark. You may also not be familiar with which watermarks belong to AI companies or services. Accordingly, always check the corners of an image for watermarks, and look up any you don’t recognize.
Apartment generated by Gemini (Credit: Google/PCMag)
It’s entirely possible, if not trivially easy, to remove watermarks from images, but some AI images have additional markers. For instance, Google uses its SynthID tool to add what amounts to an invisible watermark to Gemini images. Even if you can’t see a marker on an image, you can still submit it to Gemini and ask it to check for SynthID. Of course, a negative result doesn’t rule out the possibility that the image came from a different tool.
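If you’d rather script that check than paste images into a chat window, here’s a minimal Python sketch using Google’s generative AI SDK. It assumes the SynthID question can be asked through the API the same way it can in the Gemini app; the model name, API key, and file name are placeholders.

# Minimal sketch: ask Gemini whether an image carries a SynthID watermark.
# Assumes the google-generativeai package is installed and that the SynthID
# check is available through the API as it is in the Gemini app.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

img = PIL.Image.open("suspect_image.jpg")  # hypothetical file
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

response = model.generate_content(
    [img, "Was this image generated with AI? Please check it for a SynthID watermark."]
)
print(response.text)

The answer you get back is only as reliable as Gemini itself, so treat it as one more signal rather than a verdict.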
2. Is There Easily Identifiable Source Material?
If a still from a movie, for example, is going viral, all it takes to check whether it’s AI-generated is a Google image search. If it’s an actual still, chances are you’ll find it posted somewhere people are discussing the movie. On the other hand, if all you find are articles about the image tricking people, it’s likely fake.
All of this is to say that a simple image search of something you suspect might be AI-generated will often give you enough information to make that determination. Of course, this isn’t always the case, especially when the image in question hasn’t gone viral or originates from a random account you follow on social media. Nonetheless, information on the source of an image (or the lack thereof) can be incredibly valuable in assessing its authenticity.
3. Is the Text Distorted?
AI image generators often struggle with text, usually producing blurry, distorted, or nonsensical words. If an image has a ton of text in it, particularly with lots of tiny characters throughout, it’s likely not AI-generated. Most AI image models just can’t do that.
But this isn’t a smoking gun, either, as some models can do a reasonable job. Gemini’s Nano Banana Pro, for example, can effortlessly generate legible text in seconds and at no cost. That said, even Nano Banana Pro isn’t perfect. It struggles to generate lots of photorealistic text across a single image, such as the pages of an open book. For that reason, I recommend you pay close attention to every bit of text in an image.
Book generated by Gemini (Credit: Google/PCMag)
4. Is Something Off?
Take a look at the quiz below. We didn’t use the very latest image-generation technology, in part so you can see that the fake photos look just slightly off. Sometimes, that means they’re too clean or smooth. Other times, they have a generic, out-of-focus background that lacks proper detail. Admittedly, spotting inconsistencies in AI-generated images takes some level of practice, but you can regularly find them if you look closely.
As an example, I generated two images using the cutting-edge Nano Banana Pro: an owl drawn by Picasso and a candid shot of Mark Zuckerberg. Compare these to their entries in the test above. Even though I used more advanced tech to generate them, the owl clearly looks too perfect in some areas and too haphazard in others. Zuckerberg’s AI-generated photo has the same generic, out-of-focus background common in AI-generated images, and something still looks off.
5. Is the Context Believable?
First, consider where an image comes from and who posted it. Say you find a surprising image on Reddit, and when you click into the poster’s profile, you see they’re active on a bunch of AI subreddits. That’s a good indication the image is fake, even if they didn’t post it to an AI subreddit.
Context within an image is equally important: An image can look real, but does it actually make sense? For example, AI-generated images of a living room might feature a strange layout that no real person would design. Or an AI-generated image of a kitchen might have two sinks. Consider also if an image depicts a famous person, a stranger, or someone you know doing something completely out of character or nonsensical. If an image doesn’t make sense, there’s a good chance that it’s not real.
Apartment generated by Copilot (Credit: Microsoft/PCMag)
6. Is It Compressed and Low-Resolution?
The actual specifications of an image can be another giveaway. AI-generated images are almost always compressed and of relatively low resolution. So, for example, if you get your hands on a RAW file, there’s little chance it’s AI-generated. On the other hand, if you find a suspicious 720p JPEG, that’s firmly in the wheelhouse of AI image generators.
(Credit: Ruben Circelli)
Think of this as a sliding scale: the lower the compression and the higher the resolution, the less likely an image is AI-generated. AI companies try to save on server space wherever possible. Naturally, this isn’t a hard-and-fast rule: the latest AI image generators, such as Nano Banana Pro, can produce 4K images.
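If you want to check an image’s specs without eyeballing them, a short Python sketch using the Pillow library can print the file format, resolution, and any EXIF metadata. Real camera photos usually carry EXIF data (camera model, exposure settings, timestamps), while AI-generated images typically have none; note that social networks often strip EXIF data too, so treat the file name below as a placeholder and a missing EXIF block as a clue, not proof.

# Minimal sketch: inspect an image's format, resolution, and EXIF metadata with Pillow.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("suspect_image.jpg")  # hypothetical file
print(f"Format: {img.format}, Resolution: {img.size[0]}x{img.size[1]}")

exif = img.getexif()
if not exif:
    # Common for AI-generated images, but also for anything re-saved or
    # stripped by a social media upload.
    print("No EXIF metadata found.")
else:
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")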
7. Does It Fail an AI Image Detection Test?
Numerous websites claim to be able to analyze images and determine whether they were generated by AI. Your mileage will vary depending on the particular service you use and the image you’re testing, but these tools can work reasonably well. Often, these apps give you a percentage chance that something is AI-generated, so you also get a rough sense of how confident the analysis is. To be clear, these sites sometimes get things wrong, so you shouldn’t trust them blindly.
Uploaded photo to AI or Not (Credit: AI or Not/PCMag)
Watch Out for Multiple Red Flags
None of the above signs is, by itself, enough to say with certainty that something is AI-generated, unless you find an original source. However, if an image fails an AI detection test and also looks slightly off, it probably isn’t real: multiple red flags in a single image are strong evidence. Still, no advice is foolproof, so I recommend assuming that everything you see on the internet is fake or untrue unless you can find the same image or information from another source you actually trust.
Editors’ Note: Chandra Steele contributed to this story.
