LAS VEGAS–Gary Marcus wanted to make one thing clear in his Black Hat talk here Wednesday: He’s not an AI hater or an AI doomer.
But not even the most delusional AI could have hallucinated itself into thinking that this founder of two AI startups, cognitive scientist, and book author is a fan of the technology in general, much less the notion that it will give rise to AGI (artificial general intelligence), superintelligence, or any other rival or successor to human cognition.
“I think there’s a chance that it might be a positive thing for the world,” Marcus allowed at the start of a 40-minute conversation with Nathan Hamiel, senior director of research at Kudelski Security.
But there are so many risks–starting with the chance that skyrocketing electrical demand from AI data centers will start to short-circuit the rest of the US. “It’s not clear that it’s sustainable, and it’s not clear what will happen if it’s not,” Marcus said.
And all of those gigawatts poured into the pursuit of AGI probably won’t yield a product that’s worth using, because the large language models involved are “insanely insecure,” he continued.
“What these systems really are are mimics,” Marcus said. “They can say things that are like the things they heard before, but they’re conceptually very weak.”
So guardrails like “only write secure code” or “don’t explain biological weapons to anybody” will remain vulnerable to attack, because these LLMs lack a “world model” to ground their behavior.
“With a simple jailbreak, you get around that,” Marcus said. “That’s because they don’t have a concept of what a biological weapon is or what secure code is.”
His prediction for the short-term result of this “kind of fakery,” as he called it: “We’re going to start to see lots of banks go down and whatever because lots of bad code is going to be written because these systems don’t actually understand what it means to write secure code.”
Does that have investors growing weary of AGI promises? Nope!
“[Elon] Musk figured out that you can make promises and there’d be no responsibility for it,” he said, reminding listeners of Musk’s long history of overpromising and underdelivering with autonomous vehicles. “And people would give you capital based on those.”
Marcus cited OpenAI CEO Sam Altman as a master practitioner of this. “I read this morning OpenAI is going to try to sell stock at a $500 billion valuation,” he said. “This is a company that has never turned a profit, doesn’t seem particularly close to a profit, doesn’t have any technical moat.”
And yet Marcus was not ready to predict existential doom from the rise of the AI machines. Hamiel brought that up later in the conversation when he asked about a post Marcus wrote in July in which he raised his estimate of the probability of AI doom for humanity–in AI-industry shorthand, “p(doom)”–from 1% to 3%.
Despite feeling that “humans are actually pretty hard to eradicate,” he elevated those odds after thinking over the scope of Elon Musk’s AI ambitions and the careless way he has pursued them.
“What if some crazy guy had sort of arbitrary amounts of money, arbitrary amounts of ego, had huge legions of followers, was in the AI business, was sloppy in the way that he was building it, et cetera?” he asked. “And one day I woke up and realized, oh my God, this guy is actually here doing this stuff.”
But 3% is still a low chance of doom. Marcus said he sees more immediate harm coming from what AI does to the people leaning on it, essentially leaving them increasingly smooth-brained even as it lowers the cost of creating misinformation.
“Critical thinking skills, you develop by using them,” he said. “And if you pass everything off to an LLM, you’re not going to develop those skills.”
Marcus also said he worried about people’s ability to stay vigilant in overseeing AI.
“People are going to, like, maybe look at the first few snippets of code that their agent makes for them,” he said. “And then they’re going to zone out.”
All this leaves Marcus extremely skeptical of the notion, publicly seized upon by President Trump in his announcement of an “AI Action Plan,” that there is a race to build AI out there to be won.
“The winner of the race with China is not going to be the person that builds the larger LLM,” he said. “Nobody’s going to get a lasting advantage there.”
But somebody who discovers “something that is genuinely new” might be able to make that great leap forward. And in a conversation after the talk, Marcus suggested that Trump’s attacks on science increased the odds that this somebody would be working in China: “It’s probably not going to be the country that’s decimating its scientific institutions.”
So what positive potential does Marcus see in this field? He emphasized a point I keep hearing from people working to build AI applications: AI can do its best work when it is not coded to be anybody’s everyday check-in or personal companion and is instead trained to focus on specific, narrow domains and tasks.
Marcus gave the example of AlphaFold, a model from the Google subsidiary DeepMind that predicts protein structures for medical research and led to its developers winning a share of the Nobel Prize in Chemistry last year.
“It’s not an LLM,” he said. “It’s a purpose-built thing. It doesn’t have all of these security problems. It’s not a chatbot trying to be one size fits all, that does everything.”
