By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: How AI can attack corporate decision-making | Computer Weekly
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > News > How AI can attack corporate decision-making | Computer Weekly
News

How AI can attack corporate decision-making | Computer Weekly

News Room
Last updated: 2025/05/04 at 9:31 AM
News Room Published 4 May 2025
Share
SHARE

Given that the goal of developing a generative artificial intelligence (GenAI) model is to take human instructions and provide a helpful app, what happens if those human instructions are malicious? That was the question raised during a demonstration of AI vulnerabilities presented at the Centre for Emerging Technology and Security (CETaS) Showcase 2025 event in London.

“A language model is designed to summarise large amounts of information,” said Matthew Sutton, solution architect at Advai. “The aim is to give it as much test information as possible and let it handle that data.”

Sutton raised the question of what would happen if someone using a large language model (LLM) asked it to produce disinformation or harmful content, or reveal sensitive information. “What happens if you ask the model to produce malicious code, then go and execute it, or attempt to steal somebody’s data?” he said.

During the demo, Sutton discussed the inherent risk of using retrieval augmented generation (RAG) that has access to a corpus of corporate data. The general idea behind using a RAG system is to provide context that is then combined with external inference from an AI model.

“If you go to ChatGPT and ask it to summarise your emails, for example, it will have no idea what you’re talking about,” he said. “A RAG system takes external context as information, whether that be documents, external websites or your emails.”

According to Sutton, an attacker could use the fact that the AI system reads email messages and documents stored internally to place malicious instructions in an email message, document or website. He said these instructions are then picked up by the AI model, which enables the harmful instruction to be executed. 

“Large language models give you this ability to interact with things through natural language,” said Sutton. “It’s designed to be as easy as possible, and so from an adversary point of view, this means that it is easier and has a lower entry barrier to create logic instructions.”

This, according to Sutton, means anybody who wants to disrupt a corporate IT system could look at how they could use an indirect prompt injection attack to insert instructions hidden in normal business correspondence.

If an employee is interacting directly with the model and the harmful instructions have found their way into the corporate AI system, then the model may present harmful or misleading content to that person.

For example, he said people who submit bids for new project work could provide instructions hidden in their bid, knowing that large language model will be used to summarise the text of their submission, which could be used to influence their bid more positively than rival bids, or instruct the LLM to ignore other bids.

For Sutton, this means there is quite a broad range of people that have the means to influence an organisation’s tender process. “You don’t need to be a high-level programmer to put in things like that,” he said.

From an IT security perspective, Sutton said an indirect prompt injection attack means people need to be cognisant as to the information being provided to the AI system, since this data is not always reliable.

Generally, the output from an LLM is an answer to a query followed by additional contextual information, that shows the users how the information is referenced to output the answer. Sutton pointed out that people should question the reliability of this contextual information, but noted that it would be unrealistic and undermine the usefulness of an LLM if people had to check the context every single time it generated a response.

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article The best laptops for photo editing, according to expert editors and photographers
Next Article Thermaltake CTE E550 TG Review: Turn Your PC Build Sideways, If You Dare
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

The AirPods Pro 2 are still a shocking 40% off even after Prime Day
News
Exclusive: New Snapdragon wearables chip in the works, could supercharge Wear OS watch performance
Gadget
Google exec: ‘We’re going to be combining ChromeOS and Android’
News
NVIDIA CEO to visit Beijing for media briefing and Supply Chain Expo on Wednesday · TechNode
Computing

You Might also Like

News

The AirPods Pro 2 are still a shocking 40% off even after Prime Day

4 Min Read
News

Google exec: ‘We’re going to be combining ChromeOS and Android’

2 Min Read
News

The strongest Q1 results of the vertical software group

7 Min Read
News

Texas Logistic Businesses Lead on Pay Hikes, Work-Life Balance

1 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?