LAS VEGAS—Many of the talks at the HumanX AI conference here bubbled over with positivity about the technology’s potential, but the last afternoon of programming featured a speaker with a different message: slow your roll.
“When it comes to AI and cyber, we’re not even close to 1% done here,” said Alex Stamos, chief information security officer at infosec firm SentinelOne. “The changes that are going to happen in this world are all ahead of us.”
Much of the discussion about AI security has involved the failures of AI models. Stamos outlined three different ways they can malfunction: traditional security failings in which an attacker gets “a system to violate the basic precepts of what its developer wanted it to do”; safety malfunctions in which it “causes harm to a human being”; and “alignment” failures in which an AI system errs on its own—like a customer-service chatbot “getting frustrated and calling my customers names.”
“Mixing these things up is causing a lot of trouble for folks who mix them up because then you’re not able to distinguish and to test these things separately,” Stamos warned.
But the bigger picture of AI security involves what AI tools are doing to change Stamos’s own business. He previously worked as the CISO at Yahoo and then Facebook and now also teaches computer-science classes at Stanford University.
“The future for cyber is human beings supervising machine-to-machine combat,” he said.
On the defensive side, AI systems already automate much of the monitoring and analysis that humans in security operations centers did years ago, leaving the human analyst to decide what defensive action to take in response.
“Instead of having to spend 30 minutes doing an analysis of this dangerous behavior, the SOC analyst gets all that information put in front of them, and they can just make a decision in 10, 15 seconds,” he said.
The next step is taking that human out of that loop. “That is because attackers are no longer going to themselves be in the loop very much longer,” he said. “And that’s because we’re going to start to see a lot more innovation from the financially motivated actors.”
He pointed to one in particular with a record of successful theft, including a recent heist of $1.4 billion in cryptocurrency: The Democratic People’s Republic of Korea, with an “intelligence service that has to make a profit.” (Stamos paused to mock North Korea’s full name: “If you have the word ‘Democratic’ in the title of your country, you’re not.”)
Groups like North Korea’s Lazarus Group differ from traditional state-sponsored attackers operating out of national intelligence agencies because they don’t have to be as careful.
“If you’re the Ministry of State Security of the People’s Republic of China, and you’ve been told you need to break into Lockheed Martin, you have to get it right,” Stamos said. “You can’t screw up, so your use of AI will be very, very limited.”
But Lazarus and other ransomware attackers don’t have to worry about that. They just need to get into as many systems as possible and see how many targets they can trick into paying up.
“They might hit 10,000 machines, but they might only end up successfully ransoming 10 of them,” Stamos said. AI already provides help with that: “They like to use AI for negotiations because it turns out a strung-out Russian 19-year-old neither has great language skills or good negotiation skills, right?”
Attackers, too, will move on to applying AI to other parts of their work—“training it to do all the different parts of their kill chain.”
That includes using commercially available software to ease the task, something Stamos illustrated by walking the audience through how he could use Microsoft’s Copilot to generate malware components.
“You can’t ask it, ‘write me a Windows worm,’” he said. “But what you can do is you can ask it for all the parts you need for a Windows worm.”
That will upend the assumption among infosec defenders that most attackers have to buy known malware off the black market; somebody with a basic grasp of Windows C++ can write new malware with AI help.
“This is going to be really, really interesting,” he said. “By interesting, I mean bad.”
Calling information security “the only part of computer science that gets worse every year,” Stamos offered one career tip: “If you do not want to be put out of a job, I strongly recommend pivoting to security because it’s not going to get better anytime soon.”
On that note, he advised attendees to be cautious in deploying AI—advice that, in one case, undermined his earlier prediction that defenders would have to trust AI to take automatic actions against attacks in progress.
“Do not put a modern AI system in a situation where on one side of it, you have high-privilege operations or high-privilege knowledge or data and low-privilege users on the other side,” he said. “Do not have your AI system make any decisions that are security sensitive.”
Expect things to get worse, Stamos concluded: “Have humility because the truth is, 95% of the bugs in your AI system have not been invented yet.”
Disclosure: I moderated three panels at HumanX, with the organizers covering my airfare and lodging.