By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: Securing AI to Benefit from AI
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > Computing > Securing AI to Benefit from AI
Computing

Securing AI to Benefit from AI

News Room
Last updated: 2025/10/21 at 7:17 AM
News Room Published 21 October 2025
Share
SHARE

Artificial intelligence (AI) holds tremendous promise for improving cyber defense and making the lives of security practitioners easier. It can help teams cut through alert fatigue, spot patterns faster, and bring a level of scale that human analysts alone can’t match. But realizing that potential depends on securing the systems that make it possible.

Every organization experimenting with AI in security operations is, knowingly or not, expanding its attack surface. Without clear governance, strong identity controls, and visibility into how AI makes its decisions, even well-intentioned deployments can create risk faster than they reduce it. To truly benefit from AI, defenders need to approach securing it with the same rigor they apply to any other critical system. That means establishing trust in the data it learns from, accountability for the actions it takes, and oversight for the outcomes it produces. When secured correctly, AI can amplify human capability instead of replacing it to help practitioners work smarter, respond faster, and defend more effectively.

Establishing Trust for Agentic AI Systems

As organizations begin to integrate AI into defensive workflows, identity security becomes the foundation for trust. Every model, script, or autonomous agent operating in a production environment now represents a new identity — one capable of accessing data, issuing commands, and influencing defensive outcomes. If those identities aren’t properly governed, the tools meant to strengthen security can quietly become sources of risk.

The emergence of Agentic AI systems make this especially important. These systems don’t just analyze; they may act without human intervention. They triage alerts, enrich context, or trigger response playbooks under delegated authority from human operators. Each action is, in effect, a transaction of trust. That trust must be bound to identity, authenticated through policy, and auditable end to end.

The same principles that secure people and services must now apply to AI agents:

  • Scoped credentials and least privilege to ensure every model or agent can access only the data and functions required for its task.
  • Strong authentication and key rotation to prevent impersonation or credential leakage.
  • Activity provenance and audit logging so every AI-initiated action can be traced, validated, and reversed if necessary.
  • Segmentation and isolation to prevent cross-agent access, ensuring that one compromised process cannot influence others.

In practice, this means treating every agentic AI system as a first-class identity within your IAM framework. Each should have a defined owner, lifecycle policy, and monitoring scope just like any user or service account. Defensive teams should continuously verify what those agents can do, not just what they were intended to do, because capability often drifts faster than design. With identity established as the foundation, defenders can then turn their attention to securing the broader system.

Securing AI: Best Practices for Success

Securing AI begins with protecting the systems that make it possible — the models, data pipelines, and integrations now woven into everyday security operations. Just as

we secure networks and endpoints, AI systems must be treated as mission-critical infrastructure that requires layered and continuous defense.

The SANS Secure AI Blueprint outlines a Protect AI track that provides a clear starting point. Built on the SANS Critical AI Security Guidelines, the blueprint defines six control domains that translate directly into practice:

  • Access Controls: Apply least privilege and strong authentication to every model, dataset, and API. Log and review access continuously to prevent unauthorized use.
  • Data Controls: Validate, sanitize, and classify all data used for training, augmentation, or inference. Secure storage and lineage tracking reduce the risk of model poisoning or data leakage.
  • Deployment Strategies: Harden AI pipelines and environments with sandboxing, CI/CD gating, and red-teaming before release. Treat deployment as a controlled, auditable event, not an experiment.
  • Inference Security: Protect models from prompt injection and misuse by enforcing input/output validation, guardrails, and escalation paths for high-impact actions.
  • Monitoring: Continuously observe model behavior and output for drift, anomalies, and signs of compromise. Effective telemetry allows defenders to detect manipulation before it spreads.
  • Model Security: Version, sign, and integrity-check models throughout their lifecycle to ensure authenticity and prevent unauthorized swaps or retraining.

These controls align directly NIST’s AI Risk Management Framework and the OWASP Top 10 for LLMs, which highlights the most common and consequential vulnerabilities in AI systems — from prompt injection and insecure plugin integrations to model poisoning and data exposure. Applying mitigations from those frameworks inside these six domains helps translate guidance into operational defense. Once these foundations are in place, teams can focus on using AI responsibly by knowing when to trust automation and when to keep humans in the loop.

Balancing Augmentation and Automation

AI systems are capable of assisting human practitioners like an intern that never sleeps. However, it is critical for security teams to differentiate what to automate from what to augment. Some tasks benefit from full automation, especially those that are repeatable, measurable, and low-risk if an error occurs. However, others demand direct human oversight because context, intuition, or ethics matter more than speed.

Threat enrichment, log parsing, and alert deduplication are prime candidates for automation. These are data-heavy, pattern-driven processes where consistency outperforms creativity. By contrast, incident scoping, attribution, and response decisions rely on context that AI cannot fully grasp. Here, AI should assist by surfacing indicators, suggesting next steps, or summarizing findings while practitioners retain decision authority.

Finding that balance requires maturity in process design. Security teams should categorize workflows by their tolerance for error and the cost of automation failure. Wherever the risk of false positives or missed nuance is high, keep humans in the loop. Wherever precision can be objectively measured, let AI accelerate the work.

Join us at SANS Surge 2026!

I’ll dive deeper into this topic during my keynote at SANS Surge 2026 (Feb. 23-28, 2026), where we’ll explore how security teams can ensure AI systems are safe to depend on. If your organization is moving fast on AI adoption, this event will help you move more securely. Join us to connect with peers, learn from experts, and see what secure AI in practice really looks like.

Register for SANS Surge 2026 here.

Note: This article was contributed by Frank Kim, SANS Institute Fellow.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article Luminkey’s Magger68 Plus Bridges the Hall Effect and Mechanical Keyboard World
Next Article Your iPhone or Mac May Soon Let You Pick How Transparent Liquid Glass Is
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

Video Is your teen using AI companions? What are the risks?
News
Cards Against Humanity: We Forced SpaceX to Get Off Our Land
News
51job to IPO in Hong Kong in first half of 2025, with over 200 million users · TechNode
Computing
The iOS 26.1 beta 4 has arrived. Here’s what’s new.
News

You Might also Like

Computing

51job to IPO in Hong Kong in first half of 2025, with over 200 million users · TechNode

1 Min Read
Computing

Meet True: HackerNoon Company of the Week | HackerNoon

4 Min Read
Computing

China’s official meets with Elon Musk as Trump begins second term · TechNode

3 Min Read
Computing

The Dragon Hatchling Learns to Fly: Inside AI’s Next Learning Revolution | HackerNoon

31 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?