The Perfect Fake Doesn’t Exist: 7 Ways to Spot an AI-Generated Image

Saturday of the Absurd

How do you tell whether an image was generated by AI? The trap is that the obvious defects are fading while the inconsistencies remain. And that is exactly where the human eye still has an edge.

The image above is a perfect training ground. It combines credibility and absurdity in the same frame: a believable studio setup, a convincing face, flattering light, clean framing, and a plausible wall award, but also a goat indoors, an ostrich entering the frame, an oversized foot, and a desk object that is hard to identify clearly. In other words, everything needed to create an impression of reality. Then, the moment you slow down, the illusion starts to crack. The red circles help, but the real point is not just to “find the mistake.” The real point is to learn how to look differently.

I have no issue with an image being generated by AI. What bothers me is when someone tries to pass synthetic fiction off as evidence of reality.

For a long time, people repeated the same checklist: look at the fingers, count the teeth, inspect the ears. The problem is that these clues are becoming less reliable. The models are improving fast. Recent reporting even notes that hands alone are no longer enough. The useful clues now are more subtle: overall coherence, physical logic, file provenance, publication context, and sometimes the presence of metadata or technical markers such as Content Credentials or invisible watermarking. Provenance standards such as C2PA and Content Credentials are designed precisely to document the origin and edits of content, even though they do not solve every problem.

First reflex: hunt for coherence, not just detail.

An AI image can succeed on a face and fail on the world around it. The background often gives it away. AP recommends checking whether the shadows, lighting, and setting remain consistent, because the main subject may look sharp while the backdrop turns illogical or suspiciously smooth. The Guardian also points to mismatched patterns, badly connected objects, and weak text rendering as recurring clues in synthetic visuals.

Second reflex: test the physical logic.

In this image, everything looks polished, yet the whole scene slips into the impossible. A goat in a studio. An ostrich peeking into frame. A foot pushed toward the camera with exaggerated perspective. A black object on the desk that looks like something without becoming clearly readable. AI loves to create “almost real” objects. It gives the brain enough signal to recognize a category, but not enough precision to validate it. That is where we have to resist our natural tendency to fill in the gaps. In my book, chapter 6, I explain how easily the brain gets trapped by framing bias and mental shortcuts when a scene feels broadly plausible. That is exactly the weakness AI exploits here.

Third reflex: beware cosmetic perfection.

Experts quoted by AP News often mention unnaturally polished texture, overly smooth skin, and surfaces that look too uniform, as if reality had been sanded down. That smoothing effect is not limited to faces. It can affect walls, fabric, lamps, reflections, and furniture. Reality contains noise, friction, and micro-imperfections. AI has long preferred the “too finished” version of the world, and even though it keeps improving, that aesthetic still remains a useful weak signal.

Fourth reflex: inspect what should be ordinary.

AI is sometimes more convincing with the spectacular than with the mundane. A dramatic portrait, a sci-fi scene, or a heavily stylized image can work better than a simple wall plaque, printed text, a light switch, a seam, or a lamp joint. In this visual, the YouTube plaque looks plausible at first glance, then odd when you stare at it. The same goes for the black object on the desk: the image suggests a function without delivering a crisp reading. When the ordinary turns fuzzy, your internal alarm should go off.

Fifth reflex: verify provenance before reacting.

A standalone image, stripped of context, is easier to believe. Where does it come from? Who posted it? Are there other views of the same scene? Is there a credible editorial trail? Initiatives such as Google’s SynthID aim to add detectable invisible watermarks to some AI-generated media. Adobe, Microsoft, and the wider C2PA ecosystem are also pushing Content Credentials to attach a technical history of creation and editing to a file. That does not mean every unmarked image is fake, nor that every marked image becomes automatically trustworthy. It simply means the key question is shifting from “What do I see?” to “What verifiable trace comes with what I see?”
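One practical way to start that verification is to look for traces of an embedded provenance manifest in the file itself. Here is a minimal sketch in Python, assuming only that C2PA manifests are typically embedded in JUMBF boxes, so the byte patterns "jumb" and "c2pa" often appear in files that carry Content Credentials. This is a heuristic, not an official API, and a real check should use a proper C2PA verification tool such as Adobe's Content Credentials inspector.

```python
# Heuristic sketch: scan a file for byte patterns that suggest an
# embedded C2PA / Content Credentials manifest. The patterns below
# are assumptions based on how JUMBF boxes are labeled, not a
# guarantee; this does NOT validate the manifest cryptographically.

def has_provenance_hint(path: str) -> bool:
    """Return True if the file contains byte patterns that hint at
    an embedded provenance manifest (JUMBF box / C2PA label)."""
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data or b"c2pa" in data
```

Keep the limits of this in mind: the absence of these markers does not prove an image is synthetic (most real photos carry no Content Credentials either), and their presence does not make an image trustworthy on its own. It only tells you there may be a verifiable trace worth inspecting with a real validator.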

Sixth reflex: accept that detection is becoming probabilistic.

The era of “I know for sure” is ending. NIST is actively evaluating forensic and detection systems for generated and manipulated media, which shows that robust identification is still a serious technical challenge, not a bar-stool guessing game.
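What "probabilistic" means in practice can be made concrete with a toy model. The sketch below combines several weak cues into a single probability instead of a yes/no verdict; every number in it is invented for illustration and does not come from any real detector or from the sources cited here.

```python
import math

# Toy illustration only: each cue gets an assumed likelihood ratio,
# i.e. how much more often it appears in AI images than in real
# photos. These values are made up for the example.
CUES = {
    "overly_smooth_textures": 2.0,
    "inconsistent_shadows":   3.0,
    "garbled_text":           5.0,
    "no_provenance_metadata": 1.2,  # very weak: most real photos lack it too
}

def probability_ai(observed: list[str], prior: float = 0.5) -> float:
    """Naive log-odds combination of observed cues into P(AI-generated)."""
    log_odds = math.log(prior / (1.0 - prior))
    for cue in observed:
        log_odds += math.log(CUES[cue])
    return 1.0 / (1.0 + math.exp(-log_odds))
```

The point of the exercise is the shape of the output, not the numbers: stacking several weak cues raises the estimate, a strong cue raises it faster, but nothing ever reaches certainty. That is the mindset the NIST evaluations reflect.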

Seventh reflex: train your visual culture.

The more AI-generated images you see, the faster you detect their habits. The issue is not only technical. It is cultural. Someone who understands visual storytelling, photography, perspective, materials, lighting, staging, and the limits of generators will notice the seams faster. In my book, chapter 14, I discuss applying artificial intelligence with a simple idea: a tool can look impressive from a distance; method, judgment, and use are what make the difference.

The most important point may be this: an AI-generated image is not automatically the problem.

The problem starts when its synthetic nature is erased so it can borrow the authority of reality. At that point, we leave the field of creation and enter the field of manipulation.

So yes, in this image, the red circles help. But in real life, you will not get red circles. You will only have your attention, your critical thinking, and your ability to ask a simple question before sharing: “What in this image deserves to be verified?”

More serious posts during the week.
Saturday of the Absurd on… Saturday.
And Sunday of the Strange on… Sunday. You catch on quickly.

References

AP News: https://apnews.com/article/one-tech-tip-spotting-deepfakes-ai-8f7403c7e5a738488d74cf2326382d8c
The Guardian: https://www.theguardian.com/technology/2024/apr/08/how-to-tell-if-an-image-is-ai-generated
Google DeepMind (SynthID announcement): https://deepmind.google/blog/identifying-ai-generated-images-with-synthid
Google DeepMind (SynthID): https://deepmind.google/models/synthid
Adobe (Content Credentials): https://helpx.adobe.com/creative-cloud/apps/adobe-content-authenticity/content-credentials/overview.html
C2PA: https://c2pa.org
Microsoft (Content Credentials): https://learn.microsoft.com/en-us/azure/foundry-classic/openai/concepts/content-credentials
NIST (forensic evaluation against deepfakes): https://www.nist.gov/publications/guardians-forensic-evidence-evaluating-analytic-systems-against-ai-generated-deepfakes
NIST (Open Media Forensics Challenge): https://www.nist.gov/publications/nist-open-media-forensics-challenge-openmfc-briefing-iird
NIST (GenAI pilot evaluation plan): https://www.nist.gov/publications/2025-nist-genai-pilot-evaluation-plan-image-generators


Philippe Boulanger, international speaker on innovation and artificial intelligence, author, advisor, mentor and consultant.
