Is the Turing Test Still the Best Way to Tell Machines and Humans Apart?

News Room · Published 8 May 2025 · Last updated 11:53 AM

In 1950, Alan Turing proposed his famous test as a measure of intelligence. A human judge would chat through text with both a person and a machine. If the judge couldn’t spot the difference, the machine earned the label of intelligent.
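
For readers who think in code, the protocol is simple enough to sketch. What follows is a minimal, hypothetical Python rendering of the imitation game; `judge_ask`, `judge_guess`, `human`, and `machine` are all stand-in callables, not any real API.

```python
import random

def imitation_game(judge_ask, judge_guess, human, machine, rounds=5):
    """Hypothetical sketch of Turing's imitation game."""
    # Randomly hide the two contestants behind anonymous labels.
    roles = [human, machine]
    random.shuffle(roles)
    contestants = dict(zip(("A", "B"), roles))

    # The judge interrogates each contestant over text alone.
    transcripts = {label: [] for label in contestants}
    for _ in range(rounds):
        for label, respond in contestants.items():
            question = judge_ask(label, transcripts[label])
            transcripts[label].append((question, respond(question)))

    # The judge must now name the machine; the machine "passes"
    # if the judge cannot single it out.
    guess = judge_guess(transcripts)  # returns "A" or "B"
    return contestants[guess] is not machine
```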

For years, this imitation game shaped public benchmarks for AI. Today, however, we have AI systems like GPT-4 and Google’s Gemini that can carry on shockingly fluent conversations. They often pass as human to untrained observers, easily clearing the linguistic bar set by Turing.

Yet many researchers argue that this isn’t enough. A machine might appear to understand language while fundamentally lacking true comprehension.

The Turing Test, brilliant as it was, never really measured whether an AI grasped meaning, only whether it could mimic the surface behavior of understanding. Early AI critics doubted a computer could ever handle the true complexity of human language. Yet here we are, with models ingesting billions of lines of text and seemingly pulling it off.

Popular meme poking fun at the fact that only average models pass the Turing test, while both the dumbest and smartest AI models flunk it.

Mimicry ≠ Comprehension: The Stochastic Parrot Problem

A prominent paper by Bender et al. famously described large language models (LLMs) as “stochastic parrots,” suggesting that these systems statistically regurgitate patterns in their training data without any genuine grasp of meaning.
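
To make the “statistical regurgitation” point concrete, here is a deliberately crude sketch: a bigram model that emits fluent-looking text purely from co-occurrence counts. Real LLMs are vastly more sophisticated, but the toy captures the shape of the parrot critique.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def parrot(model, seed, length=10):
    """Emit text by sampling observed successors -- pure pattern
    replay, with no representation of meaning anywhere."""
    out = [seed]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(parrot(model, "the"))  # e.g. "the cat sat on the rug"
```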

There is a lively debate on this point. Even as some researchers argue these models lack any real understanding, others see glimmers of comprehension emerging. GPT-4, for example, has surprised many with its ability to solve novel problems, explain jokes, or write code, behaviors that seem to require a degree of conceptual understanding.

In one extensive evaluation, Google researchers noted LLMs’ surprising ability to reason through complex tasks and even hinted at abstract reasoning capabilities akin to human general intelligence. Detractors respond that this, too, can be explained by massive memorization and interpolation of training-data patterns.

What’s clear is that language models push us to wrestle with a question: What does understanding really mean? Is it just about producing the right outputs? Or does it demand internal representations closer to human concepts?

Do They Know What They’re Doing? Real-World Implications

For executives and product teams relying on AI models, this debate matters. People tend to overcredit AI with smarts or even a mind of its own. We shout at glitchy laptops, give our cars nicknames, flinch when a Boston Dynamics robot dog takes a kick in a demo video.

That urge grows stronger when something chats back in casual, fluent tones. Take a wild case from Google. An engineer swore their chatbot, LaMDA, was sentient. Why? It spun tales of fears and wishes, saying, “I want everyone to understand that I am, in fact, a person,” then riffed on its feelings.

This overcredit can have concrete consequences for products. Consider an AI-powered customer service agent that sounds perfectly polite and knowledgeable. Users might assume it truly understands their issues. But if the underlying model lacks a real grasp of the business policies or the nuances of the customer’s problem, it might give plausible-sounding but incorrect advice.

Plenty of AI systems already do this. They churn out bold lies, the infamous hallucinations. GPT-4, for example, cooks up fake facts, twists quotes, even invents legal cases, all wrapped in perfect grammar. It doesn’t catch its own errors. Humans may know when they’re off, but these models just continue with their best guess of what confidence sounds like.

Now, with the advent of newer models such as OpenAI’s o3, many of these hallucinations and errors have been minimized, but it is unclear whether the errors have been truly addressed or simply pushed deeper into more complex responses, making them even harder for our systems to catch.

Rethinking ‘Understanding’ (and Our Assumptions)

Models like GPT-4 dazzle us, pushing us to rethink intelligence itself. They shake the old idea that only living brains can handle meaning beyond raw patterns. Maybe machines can grasp something close to understanding, just in strange, unfamiliar ways that don’t mirror human thought.

However, it is important to stress that these systems remain tools. They miss the lived moments, the sense of self, the drives that shape how humans think. Without those, they lack the accountability and insight we demand from people calling the shots.

My advice to business leaders is to lean on these tools, but assume they are blind. Test your AI-powered systems hard and hunt for slip-ups. Assume they miss the subtle stuff unless you prove they don’t.

Keep humans in the mix at critical points, set up checks to spot wobbly or off-base answers, and clue users in about what AI can’t do.
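
As a sketch of what such checks might look like in a customer-facing flow, consider the following. Everything here is a hypothetical placeholder for your own stack: `call_model` is assumed to return a draft answer plus a confidence score, and the confidence floor and policy phrases are invented for illustration.

```python
CONFIDENCE_FLOOR = 0.75  # hypothetical threshold; tune to your risk tolerance
FORBIDDEN_CLAIMS = ("guaranteed refund", "legal advice")  # example policy screen

def answer_customer(question, call_model, review_queue):
    # `call_model` is a stand-in assumed to return (draft_text, confidence).
    draft, confidence = call_model(question)

    # Gate 1: low self-reported confidence escalates to a human.
    if confidence < CONFIDENCE_FLOOR:
        review_queue.append((question, draft))
        return "A specialist will follow up shortly."

    # Gate 2: a simple screen for claims the bot must never make on its own.
    if any(phrase in draft.lower() for phrase in FORBIDDEN_CLAIMS):
        review_queue.append((question, draft))
        return "A specialist will follow up shortly."

    return draft
```

The point is not the specific thresholds but the structure: the model never gets the last word on anything consequential.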

Stay tuned to the research, too. Fresh tricks in interpretability and alignment keep rolling out. New models aim to reason better, even signal when they’re unsure. Weaving those upgrades into your setup can dodge the pitfalls of AI’s thin grasp.

As we push forward into this new era, challenging long-held assumptions, the question “Does it truly understand?” will remain somewhat philosophical. But by probing that question, with tests beyond Turing’s, we’ll get ever closer to AI that we understand, and maybe, eventually, AI that understands us too.


Nick Talwar is a CTO, ex-Microsoft, and fractional AI advisor who helps executives navigate AI adoption. He shares insights on AI-first strategies to drive bottom-line impact.

→ Follow him on LinkedIn to catch his latest thoughts.

→ Subscribe to his free Substack for in-depth articles delivered straight to your inbox.

→ Join the AI Executive Strategy Program to accelerate your organization’s AI transformation.
