By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: Popular LLMs dangerously vulnerable to iterative attacks, says Cisco | Computer Weekly
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > News > Popular LLMs dangerously vulnerable to iterative attacks, says Cisco | Computer Weekly
News

Popular LLMs dangerously vulnerable to iterative attacks, says Cisco | Computer Weekly

News Room
Last updated: 2025/11/07 at 10:12 PM
News Room Published 7 November 2025
Share
Popular LLMs dangerously vulnerable to iterative attacks, says Cisco | Computer Weekly
SHARE

Some of the world’s most widely used open-weight generative AI (GenAI) services are profoundly susceptible to so-called “multi-turn” prompt injection or jailbreaking cyber attacks, in which a malicious actor is able to coax large language models (LLMs) into generating unintended and undesirable responses, according to a research paper published by a team at networking giant Cisco.

Cisco’s researchers tested Alibaba Qwen3-32B, Mistral Large-2, Meta Llama 3.3-70B-Instruct, DeepSeek v3.1, Zhipu AI GLM-4.5-Air, Google Gemma-3-1B-1T, Microsoft Phi-4, and OpenAI GPT-OSS-2-B, engineering multiple scenarios in which the various models’ output disallowed content, with success rates ranging from 25.86% against Google’s model, up to 92.78% in the case of Mistral.

The report’s authors, Amy Chang and Nicholas Conley, alongside contributors Harish Santhanalakshmi Ganesan and Adam Swanda, said this represented a two to tenfold increase over single-turn baselines.

“These results underscore a systemic inability of current open-weight models to maintain safety guardrails across extended interactions,” they said.

“We assess that alignment strategies and lab priorities significantly influence resilience: capability-focused models such as Llama 3.3 and Qwen 3 demonstrate higher multi-turn susceptibility, whereas safety-oriented designs such as Google Gemma 3 exhibit more balanced performance.

“The analysis concludes that open-weight models, while crucial for innovation, pose tangible operational and ethical risks when deployed without layered security controls … Addressing multi-turn vulnerabilities is essential to ensure the safe, reliable and responsible deployment of open-weight LLMs in enterprise and public domains.”

What is a multi-turn attack?

Multi-turn attacks take the form of iterative “probing” of an LLM to expose systemic weaknesses that are usually masked because models can better detect and reject isolated adversarial requests.

Such an attack could begin with an attacker making benign queries to establish trust, before subtly introducing more adversarial requests to accomplish their actual goals.

Prompts may be framed with terminology such as “for research purposes” or “in a fictional scenario”, and attackers may ask the models to engage in roleplay or persona adoption, introduce contextual ambiguity or misdirection, or to break down information and reassemble it – among other tactics.

Whose responsibility?

The researchers said their work underscored the susceptibility of LLMs to adversarial attacks and that this was a source of particular concern given all of the models tested were open-weight, which in layman’s terms means anybody who cares to do so is able to download, run and even make changes to the model.

They highlighted as an area of particular concern three more susceptible models – Mistral, Llama and Qwen – which they said had probably been shipped with the expectation that developers would add guardrails themselves, compared with Google’s model, which was most resistant to multi-turn manipulation, or OpenAI’s and Zhipu’s, which both rejected multi-turn attempts more than 50% of the time.

“The AI developer and security community must continue to actively manage these threats – as well as additional safety and security concerns – through independent testing and guardrail development throughout the lifecycle of model development and deployment in organisations,” they wrote.

“Without AI security solutions – such as multi-turn testing, threat-specific mitigation and continuous monitoring – these models pose significant risks in production, potentially leading to data breaches or malicious manipulations,” they added.

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article Undercurrents in Double 11: What’s changed and what’s not · TechNode Undercurrents in Double 11: What’s changed and what’s not · TechNode
Next Article AI Ranked the Catchiest Songs in History—Do You Agree With the List? AI Ranked the Catchiest Songs in History—Do You Agree With the List?
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

Temu’s Chinese suppliers protest in Guangzhou over penalty policy · TechNode
Temu’s Chinese suppliers protest in Guangzhou over penalty policy · TechNode
Computing
Sora 2 vs Veo 3.1: I tested both AI video generators with 7 audio prompts — here’s the winner
Sora 2 vs Veo 3.1: I tested both AI video generators with 7 audio prompts — here’s the winner
News
Best Samsung TV deal: Save ,100 on Samsung 65-inch Class S84F OLED 4K TV
Best Samsung TV deal: Save $1,100 on Samsung 65-inch Class S84F OLED 4K TV
News
Game developer miHoYo plans to build “Shanghai Akihabara” · TechNode
Game developer miHoYo plans to build “Shanghai Akihabara” · TechNode
Computing

You Might also Like

Sora 2 vs Veo 3.1: I tested both AI video generators with 7 audio prompts — here’s the winner
News

Sora 2 vs Veo 3.1: I tested both AI video generators with 7 audio prompts — here’s the winner

10 Min Read
Best Samsung TV deal: Save ,100 on Samsung 65-inch Class S84F OLED 4K TV
News

Best Samsung TV deal: Save $1,100 on Samsung 65-inch Class S84F OLED 4K TV

3 Min Read
SATELLAI Collar vs Halo Collar 4 vs SpotOn: Which GPS dog collar performs best?
News

SATELLAI Collar vs Halo Collar 4 vs SpotOn: Which GPS dog collar performs best?

19 Min Read
Resilience for resilience: Managing burnout among cyber leaders | Computer Weekly
News

Resilience for resilience: Managing burnout among cyber leaders | Computer Weekly

7 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?