From trust to turbulence: Cyber’s road ahead in 2026 | Computer Weekly

News Room
Published 4 December 2025

In 2025, trust became the most exploited surface in modern computing. For decades, cyber security centred on vulnerabilities: software bugs, misconfigured systems and weak network protections. The year’s incidents marked a clear turning point, as attackers no longer needed to rely solely on those traditional techniques.

This shift wasn’t subtle. It emerged across nearly every major incident: supply chain breaches leveraging trusted platforms, credential abuse across federated identity systems, misuse of legitimate remote access tools and cloud services, and AI-generated content slipping past traditional detection mechanisms. In other words, even well-configured systems could be abused if defenders assumed that trusted equals safe.

Reviewing the lessons of 2025 is essential if cyber security professionals are to understand the evolving threat landscape and adapt their strategies accordingly.

The perimeter is irrelevant – trust is the threat vector

Organisations discovered that attackers exploit assumptions just as effectively as vulnerabilities, simply by borrowing trust signals that security teams had overlooked. They blended into environments using standard developer tools, cloud-based services and signed binaries that were never designed with strong telemetry or behavioural controls.

The rapid growth of AI in enterprise workflows was also a contributing factor. From code generation and operations automation to business analytics and customer support, AI systems began making decisions previously made by people. This introduced a new category of risk: automation that inherits trust without validation. The result? A new class of incidents in which attacks weren’t loud or obviously malicious but piggybacked on legitimate activity, forcing defenders to rethink which signals matter, what telemetry is missing and which behaviours should be considered sensitive even when they originate from trusted pathways.

Identity and autonomy took centre stage

Beyond software vulnerabilities, identity now defines the modern attack surface. As more services, applications, AI agents and devices operate autonomously, attackers increasingly target identity systems and the trust relationships between components. Once an attacker holds a trusted identity, they can move with minimal friction, which expands the meaning of privilege escalation: it is no longer just about obtaining higher system permissions, but also about leveraging an identity that others naturally trust. Faced with these identity-focused attacks, defenders realised that distrust by default must now apply not only to network traffic but also to workflows, automation and the decisions made by autonomous systems.
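
To make “distrust by default” concrete for non-human identities, here is a minimal Python sketch using the PyJWT library: the receiving service re-verifies an agent’s token and its scope before every action, rather than trusting the caller because the request arrives over an internal network. The secret handling, audience and scope names are illustrative assumptions for this example only, not anything described in the article.

  # Minimal sketch: re-verify a non-human identity on every call instead of
  # trusting its network origin. Requires the PyJWT package ("pip install pyjwt").
  import jwt  # PyJWT

  SHARED_SECRET = "replace-with-a-managed-secret"   # in practice, fetched from a vault
  EXPECTED_AUDIENCE = "billing-api"                 # illustrative service name

  def authorise_agent_call(token: str, required_scope: str) -> dict:
      """Verify signature, expiry, audience and scope before allowing the action."""
      # decode() rejects expired, tampered or mis-addressed tokens by raising
      # jwt.InvalidTokenError, so an invalid identity never reaches the action.
      claims = jwt.decode(
          token,
          SHARED_SECRET,
          algorithms=["HS256"],
          audience=EXPECTED_AUDIENCE,
      )
      granted = set(claims.get("scope", "").split())
      if required_scope not in granted:
          raise PermissionError(f"identity {claims.get('sub')} lacks scope {required_scope!r}")
      return claims

  # Usage: even an "internal" agent must prove it may export data, not merely connect.
  # claims = authorise_agent_call(incoming_token, required_scope="reports:export")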

AI as both a power tool and a pressure point

AI acted as both a defensive accelerator and a new frontier of risk. AI-powered code generation sped up development but also introduced logic flaws when models filled gaps based on incomplete instructions. AI-assisted attacks became more customised and scalable, making phishing and fraud campaigns harder to detect. Yet the lesson wasn’t that AI is inherently unsafe; it was that AI amplifies whatever controls, or lack of controls, surround it. Without validation, AI-generated content can mislead. Without guardrails, AI agents can make risky decisions. Without observability, AI-driven automation can drift into unintended behaviour. This highlights that AI security concerns the entire ecosystem: LLMs, GenAI apps and services, AI agents and the underlying infrastructure.
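
To illustrate the guardrail point, the hypothetical Python sketch below refuses to execute an AI-proposed shell command unless it passes an explicit allowlist and pattern check, and it records what was approved. The allowed binaries and blocked patterns are invented for the example and are nowhere near a complete policy.

  # Minimal guardrail sketch: nothing an AI agent proposes runs until an explicit
  # policy check approves it, and every approval is recorded for observability.
  import re
  import shlex
  import subprocess

  ALLOWED_BINARIES = {"ls", "cat", "grep", "df"}                  # read-only tooling only
  BLOCKED_PATTERNS = [r"\brm\b", r"\bcurl\b.*\|\s*sh", r"--force"]

  def run_if_safe(proposed_command: str) -> str:
      """Execute an AI-proposed command only if it passes the guardrail checks."""
      for pattern in BLOCKED_PATTERNS:
          if re.search(pattern, proposed_command):
              raise ValueError(f"blocked by guardrail pattern {pattern!r}")
      argv = shlex.split(proposed_command)
      if not argv:
          raise ValueError("empty command proposed")
      if argv[0] not in ALLOWED_BINARIES:
          raise ValueError(f"binary {argv[0]!r} is not on the allowlist")
      print(f"guardrail approved: {argv}")                         # audit trail stand-in
      return subprocess.run(argv, capture_output=True, text=True, check=True).stdout

  # Usage: run_if_safe("df -h") executes; run_if_safe("rm -rf /") is rejected.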

A shift towards governing autonomy

As organisations increase their reliance on AI agents, automation frameworks and cloud-native identity systems, security will transition from patching flaws to controlling decision-making pathways. We will see the following defensive strategies in action:

  • AI control-plane security: Security teams will establish governance layers around AI agent workflows, ensuring every automated action is authenticated, authorised, observed and reversible (a minimal sketch of such a chokepoint follows this list). The focus will expand from guarding data to guarding behaviour.
  • Data drift protection: AI agents and automated systems will increasingly move, transform and replicate sensitive data, creating a risk of silent data sprawl, shadow datasets and unintended access paths. Without strong data lineage tracking and strict access controls, sensitive information can drift beyond approved boundaries, leading to new privacy, compliance and exposure risks.
  • Trust verification across all layers: Expect widespread adoption of “trust-minimised architectures,” where identities, AI outputs and automated decisions are continuously validated rather than implicitly accepted.
  • Zero trust as a compliance mandate: Zero-trust architecture (ZTA) will become a regulatory requirement for critical sectors, with executives facing increased personal accountability for significant breaches tied to poor security posture.
  • Behavioural baselines for AI and automation: Just like user behaviour analytics matured for human accounts, analytics will evolve to establish expected patterns for bots, services and autonomous agents.
  • Secure-by-design identity: Identity platforms will prioritise strong lifecycle management for non-human identities, limiting the damage when automation goes wrong or is hijacked.
  • Intent-based detection: Since many attacks will continue to exploit legitimate tools, detection systems will increasingly analyse why an action occurred rather than just what happened.

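The AI control-plane item above can be pictured as a single chokepoint that every automated action must pass through. The Python sketch below is a hypothetical illustration of making an action authenticated, authorised, observed and reversible; the agent names, policy table and audit handling are invented for the example.

  # Hypothetical control-plane chokepoint: authenticate the agent, authorise the
  # specific action, record it (observed), and roll back on failure (reversible).
  import json
  import time
  from typing import Callable

  AUDIT_LOG: list[dict] = []                                       # stand-in for a real audit pipeline
  POLICY = {"reporting-agent": {"read_dataset", "export_report"}}  # agent -> permitted actions

  def governed_action(agent_id: str, action: str,
                      execute: Callable[[], object],
                      rollback: Callable[[], None]) -> object:
      """Run an agent action only through the control plane."""
      if agent_id not in POLICY:                                   # authenticated / known agent
          raise PermissionError(f"unknown agent {agent_id!r}")
      if action not in POLICY[agent_id]:                           # authorised for this action
          raise PermissionError(f"{agent_id} is not allowed to {action}")
      entry = {"ts": time.time(), "agent": agent_id, "action": action, "status": "started"}
      AUDIT_LOG.append(entry)                                      # observed
      try:
          result = execute()
          entry["status"] = "completed"
          return result
      except Exception:
          rollback()                                               # reversible
          entry["status"] = "rolled_back"
          raise
      finally:
          print(json.dumps(entry))

  # Usage sketch: the agent never calls export_report() directly; it asks the control
  # plane: governed_action("reporting-agent", "export_report", execute=..., rollback=...)
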
If 2025 taught us that trust can be weaponised, then 2026 will teach us how to rebuild trust in a safer, more deliberate way. The future of cyber security isn’t just about securing systems, but also about securing the logic, identity and autonomy that drive them.

Aditya K Sood is vice president of security engineering and AI strategy at Aryaka.
