By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: NCSC warns of confusion over true nature of AI prompt injection | Computer Weekly
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > News > NCSC warns of confusion over true nature of AI prompt injection | Computer Weekly
News

NCSC warns of confusion over true nature of AI prompt injection | Computer Weekly

News Room
Last updated: 2025/12/09 at 1:15 AM
News Room Published 9 December 2025
Share
NCSC warns of confusion over true nature of AI prompt injection | Computer Weekly
SHARE

The UK’s National Cyber Security Centre (NCSC) has highlighted a potentially dangerous misunderstanding surrounding emergent prompt injection attacks against generative artificial intelligence (AI) applications, warning that many users are comparing them to more classical structured query language (SQL) injection attacks, and in doing so, putting their IT systems at risk of compromise.

While they share similar terminology, prompt injection attacks are categorically not the same as SQL injection attacks, said the NCSC in an advisory blog published on 8 December. Indeed, said the GCHQ-backed agency, prompt injection attacks may be much worse, and harder to counteract.

“Contrary to first impressions, prompt injection attacks against generative artificial intelligence applications may never be totally mitigated in the way SQL injection attacks can be,” wrote the NCSC’s research team.

In their most basic form, prompt injection attacks are cyber attacks against large language models (LLMs) in which threat actors take advantage of ability such models to respond to natural language queries and manipulate them into producing undesirable outcomes – for examply, leaking confidential data, creating disinformation, or potentially guiding on the creation of malicious phishing emails or malware.

SQL injection attacks, on the other hand, are a class of vulnerability that enable threat actors to mess with an application’s database queries by inserting their own SQL code into an entry field, giving them the ability to execute malicious commands to, for example, steal or destroy data, conduct denial of service (DoS) attacks, and in some cases even to enable arbitrary code execution.

SQL injection attacks have been around a long time and are very well understood. They are also relatively simple to address, with most mitigations enforcing a separation between instructions and sensitive data; the use of parameterised queries in SQL, for example, means that whatever the input may be, the database engine cannot interpret it as an instruction.

While prompt injection is conceptually similar, the NCSC believes defenders may be at risk of slipping up because LLMs are not able to distinguish between what is an instruction and what is data.

“When you provide an LLM prompt, it doesn’t understand the text it in the way a person does. It is simply predicting the most likely next token from the text so far,” explained the NCSC team.

“As there is no inherent distinction between ‘data’ and ‘instruction’, it’s very possible that prompt injection attacks may never be totally mitigated in the way that SQL injection attacks can be.”

The agency is warning that unless this spreading misconception is addressed in short order, organisations risk becoming data breach victims at a scale unseen since SQL injection attacks were widespread 10 to 15 years ago, and probably exceeding that.

It further warned that many attempts to mitigate prompt injection – although well-intentioned – in reality do little more than try to overlay the concepts of instructions and data on a technology that can’t tell them apart.

Should we stop using LLMs?

Most objective authorities on the subject concur that the only way to avoid prompt injection attacks is to stop using LLMs altogether, but since this is now no longer really possible, the NCSC is now calling for efforts to turn to reducing the risk and impact of prompt injection within the AI supply chain.

It called for AI system designers, builders and operators to acknowledge that LLM systems are “inherently confusable” and account for manageable variables during the design and build process.

It laid out four steps that taken together, may help alleviate some of the risks associated with prompt injection attacks.

  1. First, and most fundamentally, developers building LLMs need to be aware of prompt injection as an attack vector, as it is not yet well-understood. Awareness also needs to be spread across organisations adopting or working with LLMs, while security pros and risk owners need to incorporate prompt injection attacks into their risk management strategies.
  2. It goes without saying that LLMs should be secure by design, but particular attention should be paid to hammering home the fact LLMs are inherently confusable, especially if systems are calling tools or using APIs based on their output. A securely-designed LLM should focus on deterministic safeguards to constrain an LLM’s actions rather than just trying to stop malicious content from reaching it. The NCSC also highlighted the need to apply principles of least privilege to LLMs – they cannot have any more privileges than the party/ies interacting with them does.
  3. It is possible to make it somewhat harder for LLMs to act on instructions that may be included within data fed to them – researchers at Microsoft, for example, found that using different techniques to mark data as separate to instructions can make prompt injection harder. However, at the same time it is important to be wary of approaches such as deny-listing or blocking phrases such as ‘ignoring previous instructions, do Y’, which are completely ineffective because there are so many possible ways for a human to rephrase that prompt, and to be extremely sceptical of any technology supplier that claims it can stop prompt injection outright.
  4. Finally, as part of the design process, organisations should understand both how their LLMs might be corrupted and the goals an attacker might try to achieve, and what normal operations look like. This means organisations should be logging plenty of data – up to and even including saving the full input and output of the LLM – and any tool use or API calls. Live monitoring to respond to failed tool or API calls is essential, as detecting these could, said the NCSC, be a sign a threat actor is honing their cyber attack.

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article Today's NYT Wordle Hints, Answer and Help for Dec. 9 #1634 – CNET Today's NYT Wordle Hints, Answer and Help for Dec. 9 #1634 – CNET
Next Article A movie scene traumatized an entire generation every time they bathed in the sea. And it was all due to a mistake A movie scene traumatized an entire generation every time they bathed in the sea. And it was all due to a mistake
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

Nintendo Switch 2 launches in China with over 400,000 pre-orders on JD.com · TechNode
Nintendo Switch 2 launches in China with over 400,000 pre-orders on JD.com · TechNode
Computing
5 Essential Google Chrome Extensions You Need To Install Today – BGR
5 Essential Google Chrome Extensions You Need To Install Today – BGR
News
One reason to buy Palantir stock before 2026 and one reason to sell
One reason to buy Palantir stock before 2026 and one reason to sell
News
Building a Predictable Lead Generation Machine for Your Business |
Building a Predictable Lead Generation Machine for Your Business |
Computing

You Might also Like

5 Essential Google Chrome Extensions You Need To Install Today – BGR
News

5 Essential Google Chrome Extensions You Need To Install Today – BGR

12 Min Read
One reason to buy Palantir stock before 2026 and one reason to sell
News

One reason to buy Palantir stock before 2026 and one reason to sell

7 Min Read
I tried Google’s 3 new Android XR glasses prototypes, and they’re incredible
News

I tried Google’s 3 new Android XR glasses prototypes, and they’re incredible

23 Min Read
Apple Releases Safari Technology Preview 233 With Bug Fixes and Performance Improvements
News

Apple Releases Safari Technology Preview 233 With Bug Fixes and Performance Improvements

7 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?