Threats versus potential benefits: Weighing up the enterprise risk of embracing AI | Computer Weekly

News Room
Published 9 May 2025 | Last updated 9 May 2025, 4:34 PM

Throwing artificial intelligence (AI) tools at the wall to see what sticks will most likely deliver mixed results. To realise the opportunities, it pays to scope and minimise potential risks in advance.

After all, even well-resourced companies are still struggling to figure out their approach to AI, as Dael Williamson, EMEA chief technology officer at data analytics and AI software provider Databricks, confirms. 

“For instance, copying and pasting from one proprietary thing to another, and then another, comes with an inherent ‘tax’ on data integrity. You need all the checks and balances. And all companies can experience this, because all companies have siloes,” Williamson observes.

If your data is problematic or just plain wrong, inferencing will suffer, and you probably won’t get the return on investment (ROI). Then there’s the risk of choosing the wrong language model for your needs.

“You have to train the models. But the inference is the meat and potatoes [of] what you’re actually going for,” Williamson notes. “AI can be incredibly useful. But it’s also tricky.”

Securing AI also presents risk, and not just from AI-enabled attacks, such as more sophisticated social engineering, prompt injections or slop-squatting.

Richard Cassidy, EMEA chief information security officer at cloud data management company Rubrik, says if you do not lean in on the “how” of AI goals, you can introduce security concerns of different kinds.

For instance, AI can become a “noise generator” that distracts users – including from real incidents – and increases waste and costs. In addition, carefully devised security controls might not carry across to the AI workflow.

On top of that, the relevant digital skills can be lacking, and workflows often are not yet sufficiently digitised, he says.


Risk assessment and prioritisation

“People don’t ask what AI adoption looks like in practice,” he says. “CISOs can build data lakes of epic proportions, with multifactor authentication, user attribution, secure access, and so on. Then AI comes along and maps a numerical representation into its workflow, embedding models, and then vector databases, getting the outputs through retrieval augmented generation (RAG) workflows and so on, and the security controls are lost.”
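The gap Cassidy describes can be made concrete. The toy sketch below (not a Databricks or Rubrik API; the hash-based embedding is a stand-in for a real embedding model, and the documents and group names are hypothetical) shows one way to avoid losing access controls in a RAG pipeline: carry each source document's ACL onto its chunks, and filter by the user's groups before ranking.

```python
import hashlib
import math

def embed(text: str, dims: int = 8) -> list[float]:
    """Toy embedding: hash words into a fixed-size vector (stand-in for a real model)."""
    vec = [0.0] * dims
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Each chunk keeps the ACL of its source document (hypothetical data).
store = [
    {"text": "Q3 salaries by employee", "acl": {"hr"}},
    {"text": "Q3 marketing campaign results", "acl": {"marketing", "hr"}},
]
for chunk in store:
    chunk["vec"] = embed(chunk["text"])

def retrieve(query: str, user_groups: set[str], top_k: int = 1) -> list[str]:
    """Retrieval that re-applies access controls: filter BEFORE ranking."""
    allowed = [c for c in store if c["acl"] & user_groups]
    ranked = sorted(allowed, key=lambda c: cosine(c["vec"], embed(query)), reverse=True)
    return [c["text"] for c in ranked[:top_k]]

# A marketing-only user never sees the HR-only chunk, however similar the query.
print(retrieve("Q3 salaries", {"marketing"}))  # ['Q3 marketing campaign results']
```

The key design point is that the ACL check happens inside retrieval, not as an afterthought on the model's output: by the time text reaches the prompt, filtering is too late.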

This matches Office for National Statistics (ONS) figures that suggest the most common barriers to AI adoption are difficulty identifying activities or business use cases (39%) and cost (21%). Some 16% of firms cited a lack of AI expertise and skills.

“If the underlying processes are flawed, AI cannot fix that. It will just amplify chaos,” says Cassidy.

“As always, start with the problem, not the hype, and don’t adopt AI just because you think you should. Ensure you pinpoint specific business challenges – customer service bottlenecks or slow cycles – and build from there.”

Reduce risk with clear usage policies and guardrails for single-workflow pilots – perhaps summarising reports, assisting queries, or automating invoice generation – then measure the impact.

Did it work? Did it reduce cost or increase value? Learn from that and build a roadmap from evidence, not enthusiasm, Cassidy advises.

Further mitigation strategies

Regardless, you likely do not want to jump into AI straight away, and you do not want to plug all your sensitive or regulated data into an off-the-shelf model to train it either, adds Tony Lock, distinguished analyst at IT market watcher Freeform Dynamics.

“Once you put data into the language model, you can’t take it out again. It’s just subsumed into the pattern,” says Lock. “That’s why RAG is around, so instead of feeding information into an LLM, you cleanse everything.”

And what if your model is pulled from the market? While open source, parallel developments and application programming interface (API) gateways can help protect organisations, Lock suggests we also cannot know exactly how risks will play out when it comes to, say, OpenAI losing an in-progress lawsuit about its rights to use others’ intellectual property.


“If you’re told by a judge that you need to take all that information out, that you’re not allowed to use it for training purposes, you’re likely going to have to start the entire language model again with properly secured data that you’ve acquired,” says Lock.

Penalties could ensue. How will the AI suppliers then respond? Will they pass on related costs to customers? Will customers themselves be penalised? These are unanswered questions that might require specific legal advice.

Before you bet on using specific data in a particular model, it might be wise to remember that there are multiple AI-related lawsuits in the pipeline.

National regulations are complicating the environment. For example, the UK government currently favours a yet-to-be-devised “opt out” process for intellectual property (IP) owners whose work is used in AI training.

Yet in the European Union, for instance, that will not work, because everything typically has to be “opt in”, notes Lock. And to opt in, users have to be told exactly how their IP is going to be used.

“Maybe the US courts will not enforce action. But then again, all those companies have European, UK, Japanese subsidiaries that could become liable, maybe even the local CEO,” he says.

At the same time, it can pay to wait. After all, there can be only one “first mover”; later entrants may benefit from a relative lack of obstacles that early adopters had to tackle.

The top recommendation

Databricks’ Williamson recommends enterprises get their data house in order first, even if that delays adoption. “Data processing and organising is hard, even for companies with money and a huge in-house team,” he says.

Usually, data is just not ready for AI. That means a need to inventory, audit and map all structured and unstructured data. A cleaner, deduplicated, standardised, accurate and relevant data foundation may require silo consolidation too, well before adding AI on top, he points out.
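The cleanup Williamson describes can be sketched in miniature. The records and field names below are hypothetical; the point is the shape of the pass, standardise each field into a canonical form, then deduplicate on a normalised key.

```python
# Hypothetical customer records with the usual silo artifacts:
# stray whitespace, inconsistent casing, near-duplicate entries.
records = [
    {"name": "Acme Ltd ", "email": "OPS@ACME.COM"},
    {"name": "acme ltd", "email": "ops@acme.com"},
    {"name": "Globex", "email": "hello@globex.io"},
]

def standardise(rec: dict) -> dict:
    """Canonicalise fields so equivalent records compare equal."""
    return {"name": rec["name"].strip().lower(),
            "email": rec["email"].strip().lower()}

def deduplicate(recs: list[dict]) -> list[dict]:
    """Keep the first occurrence of each normalised (name, email) key."""
    seen, out = set(), []
    for rec in map(standardise, recs):
        key = (rec["name"], rec["email"])
        if key not in seen:
            seen.add(key)
            out.append(rec)
    return out

clean = deduplicate(records)
print(len(clean))  # 2 -- the two Acme variants collapse into one record
```

Real pipelines add fuzzy matching, schema mapping and lineage tracking on top, but the order of operations (standardise first, then dedupe) is the part that matters before any AI sits on the data.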

The good news is that fixing data “in the broader sense” will buy time for enterprises to consider their approach and generate benefits – including cost savings, storage efficiencies and the removal of legacy or shadow IT – for the whole business.

Rubrik’s Cassidy believes opportunities are typically about “smart delegation” of tasks and the democratisation of data-based intelligence across the business. “And AI offers SMEs a genuine levelling-up capability.”

Implementation plan and timelines

Robbie Jerrom, senior principal technologist for AI at Red Hat, says enterprises should focus on working out what they should do with AI, and take as much time as they need to do that.

“First, understand your need, then narrow the use case. Don’t try to boil the ocean,” says Jerrom.

One thing organisations can do is calculate the tokens required for a given AI enablement, although it is not always easy.


“Writing some small bits of Python code, maybe 10 minutes’ work, might use 45,000 tokens. Map it back to cost, and it’s maybe a couple of cents. But if you scale that up, and have 10 developers doing it all day long, how much is it? Every time an AI agent goes out and talks to something, for example, it uses tokens.”
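Jerrom's back-of-the-envelope sum is easy to reproduce. The token count below comes from his example; the per-token price and the usage pattern (tasks per day, working days) are assumptions for illustration, not quoted rates.

```python
# Jerrom's figure: ~45,000 tokens for a ~10-minute coding task.
TOKENS_PER_TASK = 45_000
PRICE_PER_1K_TOKENS = 0.0005  # hypothetical blended $/1k tokens, NOT a quoted rate

cost_per_task = TOKENS_PER_TASK / 1_000 * PRICE_PER_1K_TOKENS
print(f"One task: ${cost_per_task:.4f}")  # about 2 cents, matching the quote

# Scale it up: 10 developers, 40 such tasks a day, 220 working days a year.
tasks_per_year = 10 * 40 * 220
annual_cost = cost_per_task * tasks_per_year
print(f"Ten developers, annually: ${annual_cost:,.2f}")
```

Even at a couple of cents a task, the assumed usage pattern lands in the thousands of dollars a year, and agentic workflows multiply the token count per task, which is why the per-call accounting is worth doing before scaling out.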

Pick something small, get some experience running something trackable, and build something from which the business will learn.

Sandboxing can reduce risk, especially when considering more autonomous systems such as agents. Examine whether an agent can be trained on the company’s static policies, for example.

Perhaps ask a model to review a contract, compare it with previous contracts, and show the differences, confusions or irregularities. You might notice two irregularities, but the model might highlight something different to think about in addition. Changes over years, for instance, might signal a possible challenge in the customer relationship that had not been previously picked up.
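The model-review step itself is out of scope here, but the comparison it builds on can be sketched with the standard library. The clause text below is hypothetical; the snippet surfaces line-level changes between two contract versions that a reviewer, or a model prompt, should then examine.

```python
import difflib

# Hypothetical clause text from two versions of the same contract.
contract_2023 = [
    "Payment due within 30 days of invoice.",
    "Either party may terminate with 90 days notice.",
]
contract_2025 = [
    "Payment due within 60 days of invoice.",
    "Either party may terminate with 30 days notice.",
]

# Keep only the added/removed lines, dropping the diff's file headers.
diff = [
    line for line in difflib.unified_diff(
        contract_2023, contract_2025,
        fromfile="2023", tofile="2025", lineterm="",
    )
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
for line in diff:
    print(line)
```

In this sketch both clauses change, payment terms doubling and the notice period shrinking, which is exactly the kind of drift over contract years that, as the example above suggests, can signal a shift in the customer relationship.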

AI can help discipline your thinking and apply method. Afterwards, double-check results and re-evaluate. Can you tune the model to better align with need, or try an alternative?

“Some of the boring use cases are where you’ll start to see value,” says Jerrom, noting that while generative AI (GenAI) makes mistakes, so do humans.

Education and training for workers are equally crucial. Most will need help learning how best to use their AI services.

“This can get you into a lot of hot water,” warns Jerrom. “AI is already everywhere.”

Next steps for enterprise AI adoption

Sue Daley, director of technology and innovation at TechUK, says all AI has “huge potential” for businesses. Regardless of shape, size or sector, it is key to understand exactly how AI can drive efficiencies and effectiveness. “What do you want it to do and what are you looking to achieve?”

As with any other technology, is AI the appropriate tool? Sometimes the benefits might come from an agentic approach, while other cases might require a small language model or a very specific technique.

“Small language models may be more appropriate for a specific business need or issue in their supply chain, logistics or operations. Context will be so important,” says Daley.

Play “mindfully” in a sandbox or safe environment to learn what AI can do. Examine compliance, security policies and practice, and ethics around responsible innovation. Consider upskilling needs. Acquire perspective from people and build cross-functional teams across the business.

“Start with education and awareness. Consider your organisation at all levels, from board level to middle management and individual workers,” says Daley. “Find ways to bring people on the journey with you. It’s a change management process, affecting a lot of people’s jobs.”

Even if enterprises think of GenAI tools as just another chatbot, many chatbots have not satisfied customers. Benefiting from AI requires serious thought, including on how the next version or product is evolving. Again, the top tip is that outputs can only be as good as your data inputs, she says.

Freeform Dynamics’ Lock adds: “Understand how to get AI working so your people say it actually helps them, rather than it being just something else to ‘get around’. When they’re picking AI up on their own, remember some might be doing advantageous things you hadn’t thought of – or something they shouldn’t. User effectiveness and happiness are crucial.”

Finally, don’t forget there are different classes of AI – some of which the business may already have experience with.
