AI claims are cheap: The challenge is to work out what’s real | Computer Weekly

News Room | Published 26 January 2026

AI security tooling is already mainstream, and 2026 will only amplify the noise. Expect more ‘AI-washed’ claims, bigger promises, and rising fear, uncertainty and doubt (FUD). The real skill will be separating genuine capability from clever packaging.

AI in security isn’t a futuristic add-on anymore. It’s already embedded across tools many organisations use daily: email security, endpoint detection, SIEM/SOAR, identity protection, data loss prevention, vulnerability management, and managed services. Vendors have relied on machine learning for years; generative artificial intelligence (GenAI) is simply the latest label stuck on the front.

What changes in 2026 is the story being sold. Boards are asking about AI. Procurement teams are adding AI clauses. CISOs are under pressure to be seen to “do something with AI”. That creates fertile ground for marketing: more webinars, more whitepapers, bolder claims, and a fresh wave of “we can automate your SOC” pitches.

Alongside that comes the familiar FUD cycle: attackers are using AI, so if you don’t buy our AI, you’re behind. There’s a grain of truth – attackers do use automation and will increasingly use AI – but it’s often used to rush buyers into tools that haven’t proven they reduce risk in your environment. It’s the same sales playbook as ever, just wearing an AI trenchcoat.

A more useful way to frame this is simple: in 2026 you’re not deciding whether to adopt AI in security; you’re deciding whether a specific product’s AI features are mature enough to help you without introducing new risk. Some AI features genuinely save analyst time or improve detection. Others are little more than chatbots bolted onto dashboards.

So, the first takeaway is a warning label: AI claims are cheap. The hard part is working out what’s real and measurable versus what’s mostly branding – and ensuring the rush to look modern doesn’t quietly create new governance problems. These might include data leakage, model risk, audit gaps, supplier lock-in, or, in defence and CNI environments, new forms of operational fragility.

Start with outcomes and your threat model, not features. Anchor decisions to your top risks – identity abuse, ransomware, data exfiltration, third-party exposure, or OT/CNI constraints – and to the controls you genuinely need to improve.

That leads to the second principle: don’t buy an AI cyber tool because it sounds clever. Buy something because it fixes a real problem you already have.

Most organisations have a small number of recurring pain points: alert overload, slow investigations, vulnerability backlogs, poor visibility of internet-exposed assets, supplier connections they don’t fully understand, identity sprawl, or logging gaps. If you start with “we need an AI product”, you’ll judge vendors on demos and buzzwords. If you start with “we need to reduce account takeover” or “we need to halve investigation time”, you can judge tools on whether they deliver that outcome.

That’s what threat modelling means in plain terms: what are you actually trying to defend against, in your environment? A bank will prioritise identity fraud, insider risk, and regulatory evidence. A defence supplier may focus on IP theft and supply-chain compromise. A CNI operator may treat availability and safety as absolute constraints, with little tolerance for automation that could disrupt operations. The same AI tool can be a good fit in one context and dangerous in another.

Practically, write down your top risks and the few improvements you want this quarter or year, then test every sales pitch against that list.
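
To make that concrete, here is a minimal sketch of what "test every pitch against the list" can look like in practice. The risk names, target outcomes, and pitch keywords are hypothetical illustrations, not a standard taxonomy:

```python
# Minimal sketch: score a vendor pitch against a written list of top risks.
# Risk names, outcomes, and pitch keywords are hypothetical examples.

TOP_RISKS = {
    "account_takeover": "reduce successful account takeovers by half this year",
    "alert_overload": "halve mean investigation time this quarter",
    "vuln_backlog": "fix critical internet-facing vulnerabilities within 14 days",
}

def assess_pitch(claimed_outcomes: list[str]) -> dict[str, bool]:
    """For each top risk, does the pitch claim a matching outcome?"""
    return {risk: risk in claimed_outcomes for risk in TOP_RISKS}

# A pitch full of buzzwords that never touches your actual risks scores poorly.
pitch = ["autonomous_soc", "genai_copilot", "alert_overload"]
print(assess_pitch(pitch))
# {'account_takeover': False, 'alert_overload': True, 'vuln_backlog': False}
```

The point is not the code but the discipline: if a claim cannot be mapped to a risk you have already written down, it goes in the "branding" pile until proven otherwise.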

For example, a vendor promises ‘autonomous response’. It sounds compelling – until you realise your real problem is incomplete identity logging and endpoints that don’t reliably report. In that case, autonomy is lipstick on a pig. Outcomes first, features second.

It’s also worth learning to spot hype patterns early. Red flags include vague ‘autonomous SOC’ claims, no measurable improvement in detection or response, glossy demos with no reproducible testing, black-box models with no auditability, and pricing that scales with panic rather than proven risk reduction.

Buy like a grown-up: governance, evidence, and an exit plan. Demand proof through pilots in your environment. Ask for false-positive and false-negative data, clarity on failure modes, and evidence the tool reduces risk or effort – not just produces nicer summaries.
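
As a worked example of the kind of evidence to ask for, the sketch below turns raw pilot counts into precision, recall, and false-positive rate. The counts are invented for illustration; real numbers should come from a pilot in your own environment:

```python
# Turn raw pilot counts into comparable detection metrics.
# The counts below are hypothetical pilot results, not vendor data.

def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Precision, recall (detection rate), and false-positive rate from a pilot."""
    return {
        "precision": tp / (tp + fp),  # how many alerts were real
        "recall": tp / (tp + fn),     # how many real incidents were caught
        "fp_rate": fp / (fp + tn),    # noise generated against benign events
    }

# Example: 42 true detections, 310 false alerts, 8 missed incidents,
# 9,640 benign events correctly ignored.
print(detection_metrics(tp=42, fp=310, fn=8, tn=9640))
# {'precision': 0.119..., 'recall': 0.84, 'fp_rate': 0.031...}
```

A tool that catches 84% of incidents but buries analysts under alerts that are wrong nearly nine times out of ten may increase effort rather than reduce it – which is exactly why the raw counts matter more than the demo.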

Pay close attention to data handling. Know what data the tool ingests, where it goes, who can access it, and whether it’s used to train models. In government, defence, and CNI settings, a helpful AI assistant can quietly become an unapproved data export mechanism if you’re not strict.

Accountability and auditability matter too. If a tool recommends or takes action, you must be able to explain why – well enough to satisfy audit, regulators, or customers. Otherwise, you’re trading security risk for governance risk.

Human oversight is essential. Automation fails at machine speed. The safest pattern is gradual: read-only, then suggest, then act with approval, and only automate fully where confidence is high and blast radius is low. Good vendors help you design those guardrails.
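
One possible shape for those guardrails is a simple policy that maps model confidence and blast radius to an allowed action mode. The tier names and thresholds below are illustrative assumptions, not an industry standard:

```python
# A sketch of the graduated-automation guardrail described above.
# Thresholds and tier names are illustrative assumptions, not a standard.

def automation_tier(confidence: float, blast_radius: str) -> str:
    """Map model confidence and blast radius to an allowed action mode."""
    if blast_radius == "high":
        return "suggest"                # never auto-act on high-impact assets
    if confidence >= 0.95 and blast_radius == "low":
        return "act"                    # full automation only here
    if confidence >= 0.80:
        return "act_with_approval"      # a human approves before execution
    return "read_only"                  # observe and log only

print(automation_tier(0.97, "low"))     # act
print(automation_tier(0.97, "high"))    # suggest
print(automation_tier(0.85, "medium"))  # act_with_approval
print(automation_tier(0.60, "low"))     # read_only
```

However the policy is expressed, the key property is that full automation is the exception you earn, not the default you enable.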

Finally, have an exit plan before you sign. Ensure you can extract your data, avoid proprietary black boxes, and revert to previous processes without a six-month rescue project. Don’t create a single point of failure where monitoring or response depends entirely on one vendor’s opaque model.

In short: prove value, control the data, keep decisions explainable, put humans in the loop until trust is earned, ensure the tool fits how you actually operate, and make sure you can walk away cleanly if the magic turns into mess.
