How CISOs can adapt cyber strategies for the age of AI | Computer Weekly

By News Room | Published 11 August 2025, last updated 8:27 PM

The age of artificial intelligence, and in particular generative AI, has arrived with remarkable speed. Enterprises are embedding AI across functions, from customer service bots and document summarisation engines to AI-driven threat detection and decision support tools.

But as adoption accelerates, CISOs are facing a new class of digital asset: the AI model, which merges intellectual property, data infrastructure, critical business logic and potential attack surface into one complex, evolving entity.

Traditional security measures may no longer be enough to cope in this new reality. To safeguard enterprise operations, reputation and data integrity in an AI-first world, security leaders may need to rethink their cyber security strategies.

‘Living digital assets’

First and foremost, AI systems and GenAI models should be treated as living digital assets. Unlike static data or fixed infrastructure, these models continuously evolve through retraining, fine-tuning and exposure to new prompts and data inputs.

This means that a model’s behaviour, decision-making logic and potential vulnerabilities can shift over time, often in opaque ways.

CISOs must therefore apply a mindset of continuous governance, scrutiny and adaptation. AI security is not simply a subset of data security or application security; it is its own domain requiring purpose-built governance, monitoring and incident response capabilities.

A critical step is redefining how organisations classify data within the AI lifecycle.

Traditionally, data security policies have focused on protecting structured data at rest, in transit or in use. However, with AI, model inputs (such as user prompts or retrieved knowledge) and outputs (such as generated content or recommendations) must also be treated as critical assets.

Not only do these inputs and outputs carry the risk of data leakage, they can also be manipulated in ways that poison models, skew outputs or expose sensitive internal logic. Applying classification labels, access controls and audit trails across training data, inference pipelines and generated results is therefore essential to managing these risks.
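As a concrete illustration, attaching classification labels and an audit trail to each prompt/response exchange might look like the sketch below. The label set, the `AuditedExchange` type and the hashing choice are all assumptions made for illustration, not a prescribed scheme:

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical label set; a real scheme should mirror the organisation's
# existing data-classification policy.
LABELS = ("public", "internal", "confidential", "restricted")

@dataclass
class AuditedExchange:
    """One prompt/response exchange carrying a classification label
    and an append-only audit trail."""
    user_id: str
    prompt: str
    label: str
    trail: list = field(default_factory=list)

    def __post_init__(self):
        if self.label not in LABELS:
            raise ValueError(f"unknown classification label: {self.label}")
        self.record("prompt_received")

    def record(self, event: str):
        # Log a hash of the prompt, not the raw text, so the audit
        # trail itself cannot leak the classified content.
        digest = hashlib.sha256(self.prompt.encode()).hexdigest()[:12]
        self.trail.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": self.user_id,
            "label": self.label,
            "event": event,
            "prompt_sha256": digest,
        })
```

The same `record` call would be reused at inference time and again when the generated output is returned, so every stage of the pipeline leaves an attributable entry.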

Supply chain risk management

The security perimeter also expands when enterprises rely on third-party AI tools or APIs. Supply chain risk management needs a fresh lens when AI models are developed externally or sourced from open platforms.

Vendor assessments must go beyond the usual checklist of encryption standards and breach history. Instead, they should require visibility into training data sources, model update mechanisms and security testing results. CISOs should push vendors to demonstrate adherence to secure AI development practices, including bias mitigation, adversarial robustness and provenance tracking.

Without this due diligence, organisations risk importing opaque black boxes that may behave unpredictably, or worse, maliciously, under adversarial pressure.

Internally, establishing a governance framework that defines acceptable AI use is paramount. Enterprises should determine who can use AI, for what purposes and under which constraints.

These policies should be backed by technical controls, from access gating and API usage restrictions to logging and monitoring. Procurement and development teams should also adopt explainability and transparency as core requirements. More broadly, it is simply not enough for an AI system to perform well; stakeholders must understand how and why it reaches its conclusions, particularly when these conclusions influence high-stakes decisions.
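A minimal sketch of such a technical control, backing the "who can use AI, for what purposes" policy with deny-by-default enforcement, might look as follows. The roles and use-case names are hypothetical; in practice the table would live in an IAM system or policy engine rather than in code:

```python
# Hypothetical acceptable-use table: role -> permitted AI use cases.
POLICY = {
    "engineer": {"code_assist", "doc_summarisation"},
    "marketing": {"content_drafting"},
}

def is_permitted(role: str, use_case: str) -> bool:
    """Deny by default: a use case is allowed only if explicitly
    granted to the caller's role."""
    return use_case in POLICY.get(role, set())
```

Gating every AI API call through a check like this turns the written policy into something that can be enforced, logged and audited.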

Turning to zero-trust

From an infrastructure standpoint, CISOs who embed zero-trust principles into the architecture supporting AI systems will help future-proof operations.

This means segmenting development environments, enforcing least-privilege access to model weights and inference endpoints, and continuously verifying both human and machine identities throughout the AI pipeline.

Many AI workloads, especially those trained on sensitive internal data, are attractive targets for espionage, insider threats and exfiltration. Identity-aware access control and real-time monitoring can help ensure that only authorised and authenticated actors can interact with critical AI resources.
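The identity-aware, least-privilege access described above can be sketched in a few lines. The `Identity` type and scope names here are assumptions for illustration; production systems would delegate this to an identity provider and policy engine:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    """A human or machine identity with an explicit set of granted scopes."""
    subject: str
    scopes: frozenset

def authorize(identity: Identity, action: str) -> bool:
    # Least privilege: the action must appear in the identity's
    # granted scopes; anything not explicitly granted is denied.
    return action in identity.scopes
```

Under this model, a batch-scoring service account holding only an "infer" scope can call the inference endpoint but can neither read nor update the model weights.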

AI-safe training

One of the most significant emerging vulnerabilities lies in the end-user interaction with GenAI tools. While these tools promise productivity gains and innovation, they can also become conduits for data loss, hallucinated outputs and social engineering. Employees may unknowingly paste sensitive information into public AI chatbots or act on flawed AI-generated advice without understanding its limitations.

CISOs should help counter this with comprehensive training programmes that go beyond generic cyber security awareness. Staff should be educated on AI-specific threats such as prompt injection attacks, model bias and synthetic identity creation. They must also be taught to verify AI outputs and avoid blind trust in machine-generated content.
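To make prompt injection concrete for such training, a coarse first-pass screen might look like the sketch below. The phrase list is hypothetical and deliberately incomplete: no static list can catch all injections, and real defences layer heuristics like this with model-based classifiers and output validation:

```python
import re

# Hypothetical phrase list for illustration only.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|the|previous) .*instructions",
    r"disregard (the|your) system prompt",
    r"reveal (the|your) (system prompt|instructions)",
]

def looks_like_injection(prompt: str) -> bool:
    """Coarse first-pass screen for common prompt-injection phrasing."""
    text = prompt.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)
```

Showing staff examples that do and do not trip such a screen is a simple way to illustrate why machine-generated content, and the prompts that produce it, should never be trusted blindly.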

Incident response

Organisations should also integrate AI threat scenarios into their incident response playbooks.

Responding to a data breach caused by prompt leakage or an AI hallucination that misinforms decision-making requires different protocols than a conventional malware incident, so tabletop exercises should be updated to include simulations of model manipulation, adversarial input attacks and the theft of AI models or training datasets, for example.

Preparedness is key: if AI systems are central to business operations, then threats to those systems must be treated with the same urgency as those targeting networks or endpoints.

Enterprise-approved platforms

In parallel, organisations should implement technical safeguards to limit the use of public GenAI tools in sensitive contexts. Whether through web filtering, browser restrictions or policy enforcement, businesses must guide employees towards enterprise-approved AI platforms that have been vetted for compliance, security and data residency. Shadow AI, or the unauthorised use of GenAI tools, poses a growing risk and must be tackled with the same rigour as shadow IT.
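At the web-proxy layer, the routing decision described above reduces to a small policy function. The host names below are placeholders; a real deployment would drive the allow and deny lists from the proxy's own policy engine:

```python
from urllib.parse import urlparse

# Hypothetical host lists for illustration.
APPROVED_AI_HOSTS = {"ai.internal.example.com"}
PUBLIC_AI_HOSTS = {"chat.public-genai.example.net"}

def ai_egress_decision(url: str) -> str:
    """Route AI traffic: approved platforms pass, known public GenAI
    endpoints are blocked, anything else is flagged for review."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_HOSTS:
        return "allow"
    if host in PUBLIC_AI_HOSTS:
        return "block"
    return "review"
```

The "review" bucket is what surfaces shadow AI: unknown AI endpoints accumulating traffic are exactly the unauthorised tools the policy is meant to find.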

Insider threat

Finally, insider threat management must evolve. AI development teams often possess elevated access to sensitive datasets and proprietary model architectures.

These privileges, if abused, could lead to significant intellectual property theft or inadvertent exposure. Behavioural analytics, strong activity monitoring and enforced separation of duties are vital to reducing this risk. As AI becomes more deeply embedded into the business, the human risks surrounding its development and deployment cannot be overlooked.
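One simple behavioural-analytics signal of this kind is a z-score check on daily access counts to sensitive model artefacts. This is a sketch under the assumption that a stable per-user baseline exists; commercial UEBA tools use far richer models:

```python
from statistics import mean, pstdev

def access_is_anomalous(history: list[int], today: int, z: float = 3.0) -> bool:
    """Flag a day whose count of sensitive-asset accesses sits more
    than z standard deviations above the historical mean."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        # Perfectly flat baseline: any increase is worth a look.
        return today > mu
    return (today - mu) / sigma > z
```

A flag raised here should feed a human review queue rather than trigger automatic action, since legitimate spikes (a release week, a retraining run) are common.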

In the AI era, the role of the CISO is undergoing profound change. While safeguarding systems and data is of course core to the role, security leaders must now help their organisations ensure that AI itself is trustworthy, resilient and aligned with organisational values.

This requires a shift in both mindset and strategy, recognising AI not just as a tool, but as a strategic asset that must be secured, governed and respected. Only then can enterprises harness the full potential of AI safely, confidently and responsibly.

Martin Riley is chief technology officer at Bridewell Consulting.
