Copyright © All Rights Reserved. World of Software.
This AI Skeptic Thinks AI Is Bringing Human Brains Down to Its Level

News Room · Published 7 August 2025 · Last updated 9:15 PM

LAS VEGAS–Gary Marcus wanted to make one thing clear in his Black Hat talk here Wednesday: He’s not an AI hater or an AI doomer.

But not even the most delusional AI could have hallucinated itself into thinking that this founder of two AI startups, cognitive scientist, and book author is a fan of the technology in general, much less the notion that it will give rise to AGI (artificial general intelligence), superintelligence, or any other rival or successor to human cognition. 

“I think there’s a chance that it might be a positive thing for the world,” Marcus allowed at the start of a 40-minute conversation with Nathan Hamiel, senior director of research at Kudelski Security. 

But there are so many risks–starting with the chance that skyrocketing electrical demand from AI data centers will start to short-circuit the rest of the US. “It’s not clear that it’s sustainable, and it’s not clear what will happen if it’s not,” Marcus said.

And all of those gigawatts poured into the pursuit of AGI probably won’t yield a product that’s worth using, because the large language models involved are “insanely insecure,” he continued. 

“What these systems really are are mimics,” Marcus said. “They can say things that are like the things they heard before, but they’re conceptually very weak.”

So guardrails like “only write secure code” or “don’t explain biological weapons to anybody” will remain vulnerable to attack, because these LLMs lack a “world model” to ground their behavior.

“With a simple jailbreak, you get around that,” Marcus said. “That’s because they don’t have a concept of what a biological weapon is or what secure code is.” 

His prediction of a short-term result from what he called this “kind of fakery”: “We’re going to start to see lots of banks go down and whatever because lots of bad code is going to be written because these systems don’t actually understand what it means to write secure code.”

Does that have investors growing weary of AGI promises? Nope! 

“[Elon] Musk figured out that you can make promises and there’d be no responsibility for it,” he said, reminding listeners of Musk’s long history of overpromising and underdelivering with autonomous vehicles. “And people would give you capital based on those.” 

Marcus cited OpenAI CEO Sam Altman as a master practitioner of this. “I read this morning OpenAI is going to try to sell stock at a $500 billion valuation,” he said. “This is a company that has never turned a profit, doesn’t seem particularly close to a profit, doesn’t have any technical moat.”

And yet Marcus was not ready to predict existential doom from the rise of the AI machines. Hamiel brought that up later on in the conversation when he asked Marcus about a post he wrote in July in which he raised his estimate of the probability of AI doom for humanity–in AI-industry shorthand, “p(doom)”–from 1% to 3%. 


Despite feeling that “humans are actually pretty hard to eradicate,” he elevated those odds after thinking over the scope of Elon Musk’s AI ambitions and the careless way he has pursued them. 

“What if some crazy guy had sort of arbitrary amounts of money, arbitrary amounts of ego, had huge legions of followers, was in the AI business, was sloppy in the way that he was building it, et cetera?” he asked. “And one day I woke up and realized, oh my God, this guy is actually here doing this stuff.”

But 3% is still a low chance of doom. Marcus said he sees more immediate harm coming from what AI does to the people leaning on it: leaving them increasingly smooth-brained even as AI lowers the cost of creating misinformation.

“Critical thinking skills, you develop by using them,” he said. “And if you pass everything off to an LLM, you’re not going to develop those skills.”

Marcus also said he worried about people’s ability to stay vigilant in overseeing AI.

“People are going to, like, maybe look at the first few snippets of code that their agent makes for them,” he said. “And then they’re going to zone out.” 

All this leaves Marcus extremely skeptical of the notion, publicly seized upon by President Trump in his announcement of an “AI Action Plan,” that there is a race to build AI out there to be won.

“The winner of the race with China is not going to be the person that builds the larger LLM,” he said. “Nobody’s going to get a lasting advantage there.”

But somebody who discovers “something that is genuinely new” might be able to make that great leap forward. And in a conversation after the talk, Marcus suggested that Trump’s attacks on science increased the odds that this somebody would be working in China: “It’s probably not going to be the country that’s decimating its scientific institutions.”

So what positive potential does Marcus see in this field? He emphasized a point I keep hearing from people working to build AI applications: AI can do its best work when it is not coded to be anybody’s everyday check-in or personal companion and is instead trained to focus on specific, narrow domains and tasks. 

Marcus gave the example of AlphaFold, from the Google subsidiary DeepMind: a model that predicts protein structures for medical research and earned its developers a share of last year’s Nobel Prize in Chemistry.

“It’s not an LLM,” he said. “It’s a purpose-built thing. It doesn’t have all of these security problems. It’s not a chatbot trying to be one size fits all, that does everything.”

About Rob Pegoraro (Contributor)

Rob Pegoraro writes about interesting problems and possibilities in computers, gadgets, apps, services, telecom, and other things that beep or blink. He’s covered such developments as the evolution of the cell phone from 1G to 5G, the fall and rise of Apple, Google’s growth from obscure Yahoo rival to verb status, and the transformation of social media from CompuServe forums to Facebook’s billions of users. Pegoraro has met most of the founders of the internet and once received a single-word email reply from Steve Jobs.
