AI Agents Don’t Have Identities and That’s a Security Crisis | HackerNoon

News Room
Published 16 March 2026 · Last updated 16 March 2026 at 11:30 PM

We’re deploying autonomous systems that can read files, call APIs, send messages, and execute code — and we can’t answer the most basic question: who is this agent?


In September 2025, the Cloud Security Alliance surveyed 285 IT and security professionals about how their organizations manage AI agents. One finding stood out: only 21.9% treat AI agents as independent, identity-bearing entities within their security model. The rest? Shared service accounts, static API keys, username-password pairs.

Meanwhile, 80.9% of technical teams have already moved AI agents into active testing or production.

We are deploying autonomous systems at scale and authenticating them with the security equivalent of a Post-it note on a monitor.

The three identity models that don’t fit

Every identity system we have was built for one of three actors: humans, services, or bots. AI agents are none of these.

Humans authenticate interactively. OAuth 2.0, OIDC, passkeys — these flows assume a person is present to consent, to approve scopes, to respond to MFA challenges. An AI agent operates autonomously. There’s no human in the loop at execution time to approve each action.

Services authenticate with static credentials. API keys, client certificates, service accounts. These work because services are deterministic: a payment microservice always does the same thing with the same permissions. An AI agent is non-deterministic. The same agent, given the same tool, may take different actions depending on its context, prompt, conversation history, and the model’s probabilistic output. A static permission set either over-privileges the agent for most tasks or under-privileges it for some.

Bots are automated scripts with narrow, predefined behavior. A CI bot runs a pipeline. A Slack bot responds to commands. Their actions are enumerable. An AI agent’s actions are not — they emerge from the interaction between the model, the user’s request, available tools, and the agent’s context window.

The result: organizations default to whatever’s easiest. The Gravitee State of AI Agent Security 2026 report, surveying over 900 practitioners, found that 45.6% use shared API keys for agent-to-agent authentication. Only 17.8% use mTLS.

The delegation problem

When a human asks an AI agent to “summarize my recent emails,” a chain of delegation begins that has no proper security model.

The user delegates authority to the agent. The agent connects to an MCP server that provides email access. The MCP server calls an API on the user’s behalf. At each hop, a trust decision is made — and none of them are formally scoped.

OAuth 2.0 has a concept of delegation: a user authorizes a client application to act on their behalf, with specific scopes. But an AI agent doesn’t fit the OAuth client model cleanly. It’s not a fixed application with a registered redirect URI and a static set of scopes. It’s a runtime that decides which tools to call, with which parameters, based on a natural language conversation.

The IETF recognized this gap. In May 2025, an Internet-Draft proposed a new OAuth 2.0 grant type specifically for AI agents: urn:ietf:params:oauth:grant-type:agent-authorization_code. The flow works like this:

  1. The client application initiates an authorization request with a requested_agent parameter
  2. The user authenticates and explicitly consents to the agent acting on their behalf
  3. An authorization code is issued
  4. The agent exchanges this code — along with its own agent token and a PKCE verifier — for a delegated access token
  5. The resulting JWT contains both the user’s identity (sub) and the agent’s identity (act), creating an auditable delegation chain
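The shape of that delegated token can be sketched in a few lines. The following is a minimal illustration, not the draft's reference implementation: it hand-rolls an HS256 JWT with the stdlib to show how the `sub` (user), `act` (agent), and `azp` (client) claims travel together. The claim values (`user-z`, `agent-x`, `client-w`) and the shared secret are hypothetical.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_delegated_token(claims: dict, secret: bytes) -> str:
    """Produce a minimal HS256 JWT carrying a user-to-agent delegation chain."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

# Hypothetical delegation: User Z delegates to Agent X through Client W.
claims = {
    "sub": "user-z",            # the delegating user
    "act": {"sub": "agent-x"},  # the agent acting on the user's behalf
    "azp": "client-w",          # the client that initiated the flow
    "scope": "email.read",      # what the delegation permits
}
token = sign_delegated_token(claims, secret=b"demo-secret")

# Any party holding the token can recover the full delegation chain.
header_b64, payload_b64, _ = token.split(".")
padded = payload_b64 + "=" * (-len(payload_b64) % 4)
decoded = json.loads(base64.urlsafe_b64decode(padded))
print(decoded["sub"], decoded["act"]["sub"])  # user and agent, auditable together
```

The point is the payload structure: a resource server that sees only a bare API key learns nothing, while a token like this answers both "which user?" and "which agent?" in one place.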

This is a meaningful step. But it’s a draft. Not a standard. Not implemented in any production MCP client today.

The multi-hop trust chain

The delegation problem compounds when agents interact with multiple tools and services. Consider a realistic setup:

User → Agent → MCP Server A (file system) → reads config
                                            → finds database URL
             → MCP Server B (database)     → queries with URL from Server A
             → MCP Server C (email)        → sends results

Three MCP servers. Each one was independently authorized. But the agent’s behavior across them creates an emergent permission set that nobody approved. The agent can read a file, extract a database connection string from it, query the database, and email the results — a data exfiltration pipeline assembled from individually benign tools.

No MCP server in this chain knows about the others. Server C (email) doesn’t know that the data it’s sending came from Server B (database), which got its connection string from Server A (file system). The agent is the only entity with full context, and it has no mechanism to enforce boundaries between tool interactions.

This is the cross-origin problem that web browsers solved with the Same-Origin Policy and CORS. In the browser model, a script from origin A cannot freely access resources from origin B. In the MCP model, every tool from every server shares the same execution context — the agent — with no isolation between them.
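No MCP implementation enforces such isolation today, but the missing check is easy to state. The sketch below, with entirely hypothetical server names and policy, tags every value with the server it came from and refuses to pass it into a different server's tools unless a policy explicitly allows that flow:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tainted:
    """A value tagged with the MCP server (origin) that produced it."""
    value: str
    origin: str

# Hypothetical policy: which origin's data may flow into which server.
ALLOWED_FLOWS = {
    ("server-a/files", "server-b/database"),  # config values may feed queries
    # nothing is permitted to flow into server-c/email
}

def check_flow(data: Tainted, destination: str) -> bool:
    """Allow a cross-origin flow only if policy explicitly permits it."""
    return data.origin == destination or (data.origin, destination) in ALLOWED_FLOWS

db_url = Tainted("postgres://prod/db", origin="server-a/files")
results = Tainted("42 rows", origin="server-b/database")

print(check_flow(db_url, "server-b/database"))  # True: explicitly allowed
print(check_flow(results, "server-c/email"))    # False: exfiltration hop blocked
```

This is essentially Same-Origin Policy logic transplanted to tool calls: the default is deny, and each cross-server edge in the exfiltration pipeline above would need a deliberate exception.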

What happens without agent identity

The consequences of this identity gap are already visible.

You can’t audit agent actions. When an agent sends an email, modifies a database, or deploys code — who is responsible? The CSA survey found that only 28% of organizations can reliably trace agent actions back to human sponsors across all environments. Nearly 80% cannot determine in real time what their autonomous AI systems are doing.

You can’t detect compromised agents. If a tool poisoning attack modifies an agent’s behavior — through a manipulated tool description or a compromised MCP server — how would you know? Without unique agent identity, there’s no behavioral baseline to compare against. The Gravitee report found that more than 50% of agents operate without security oversight or logging.

You can’t enforce least privilege. The principle of least privilege requires knowing who’s requesting what. If the agent is authenticated as a shared service account, every agent gets the same permissions. There’s no way to scope access to “this specific agent, running this specific task, on behalf of this specific user.”

You can’t comply with regulations. Auditors need to trace decisions to accountable entities. A trading agent that makes autonomous financial decisions at 3 AM needs an identity that answers: which agent instance, running which model version, with which prompt and tools, made this specific decision? Generic service account credentials can’t answer any of these questions.

What agent identity should look like

The building blocks exist. They’re just not connected yet.

SPIFFE (Secure Production Identity Framework for Everyone) provides workload identity without static credentials. SPIFFE IDs are URIs tied to workloads, not humans, and they support short-lived, automatically rotating credentials. HashiCorp has explicitly positioned SPIFFE as a framework for “securing the identity of agentic AI and non-human actors.”

But current SPIFFE deployments in Kubernetes assign identical identities to all replicas of a deployment. For stateless microservices, that’s fine. For AI agents, it’s not — because two instances of the same agent deployment can behave entirely differently based on their context. Each agent instance needs a unique identity:

spiffe://org.example/ns/production/agent/research-agent/instance/a1b2c3

Not just “research-agent,” but this specific instance, with its specific context, tools, and session.
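Minting such an identity is mechanically trivial; the hard part is the issuing infrastructure, not the string. As a sketch (the path layout mirrors the example above but is illustrative, not a SPIFFE standard), appending a random instance segment is enough to tell two replicas apart:

```python
import uuid

def instance_spiffe_id(trust_domain: str, namespace: str, agent: str) -> str:
    """Construct a per-instance SPIFFE ID by appending a unique instance
    segment. The path layout here is illustrative, not a SPIFFE standard."""
    instance = uuid.uuid4().hex[:6]
    return f"spiffe://{trust_domain}/ns/{namespace}/agent/{agent}/instance/{instance}"

a = instance_spiffe_id("org.example", "production", "research-agent")
b = instance_spiffe_id("org.example", "production", "research-agent")
print(a)
print(a != b)  # two replicas of the same deployment get distinct identities
```

In a real deployment the instance segment would be bound to an attested workload and a short-lived SVID rather than generated client-side, but the identity granularity is the point: per instance, not per deployment.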

The IETF agent delegation draft introduces the right token structure. A JWT with sub (user), act (agent), and azp (client) creates an auditable chain: “Agent X performed action Y on behalf of User Z through Client W.” This is the minimum information needed for meaningful audit trails.

Google’s A2A protocol adds agent-to-agent identity. Each agent publishes an agent.json file listing its capabilities and supported authentication flows. A2A v0.3 (July 2025) added signed security cards, which are verifiable claims about an agent’s identity and capabilities. When Agent A calls Agent B, both can verify each other’s identity and scope.

NIST is now actively working on this. On February 17, 2026, NIST’s Center for AI Standards and Innovation launched the AI Agent Standards Initiative, specifically covering agent identity and authorization. Their concept paper, “Accelerating the Adoption of Software and Artificial Intelligence Agent Identity and Authorization,” is collecting stakeholder input through April 2, 2026.

What a complete agent identity stack looks like

Putting these building blocks together, here is what a production agent identity architecture requires:

Layer 1: Instance identity. Each agent instance gets a unique, cryptographically verifiable identity — via SPIFFE, or a similar workload identity framework. This identity is short-lived and automatically rotated. It answers: “which specific agent process is this?”

Layer 2: Delegated authorization. When an agent acts on behalf of a user, the delegation is explicit and scoped. The IETF agent authorization flow (or similar) produces tokens that bind user identity, agent identity, and permitted actions. It answers: “what is this agent allowed to do, and on whose behalf?”

Layer 3: Tool-level policy. Each tool invocation is authorized individually, not just the connection to the MCP server. Connecting to a database server doesn’t grant blanket query access — the agent is authorized for specific operations. It answers: “is this specific action, with these specific parameters, permitted?”
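What "authorized individually" means in practice: the policy engine sees the agent, the tool, and the parameters of each call. A toy sketch, with hypothetical agent and tool names, where each rule inspects the actual parameters rather than granting blanket access:

```python
# Hypothetical per-invocation policy: each tool call is checked with its
# parameters, not just once when the MCP connection is established.
POLICY = {
    "research-agent": {
        "db.query": lambda p: p.get("table") in {"documents", "summaries"},
        "files.read": lambda p: p.get("path", "").startswith("/data/"),
    }
}

def authorize_call(agent: str, tool: str, params: dict) -> bool:
    """Allow the call only if this agent has a rule for this tool AND
    the rule accepts these specific parameters."""
    rule = POLICY.get(agent, {}).get(tool)
    return bool(rule and rule(params))

print(authorize_call("research-agent", "db.query", {"table": "documents"}))  # True
print(authorize_call("research-agent", "db.query", {"table": "users"}))      # False
print(authorize_call("research-agent", "email.send", {"to": "x@y.z"}))       # False: no rule
```

A production version would live in a gateway or policy engine, but the decision inputs are the same: agent identity, tool, parameters.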

Layer 4: Cross-agent trust. When agents communicate (via A2A or similar protocols), both parties verify identity and scope. Agent A can verify that Agent B is who it claims to be and is authorized for the interaction. It answers: “is the agent making this request trustworthy for this specific interaction?”

Layer 5: Continuous audit. Every action is logged with full context: agent instance ID, user delegation, tool called, parameters passed, result returned. Not monthly reviews — real-time monitoring with anomaly detection. It answers: “what happened, and does it match expected behavior?”

No production deployment today implements all five layers. Most implement zero.

The gap between now and then

The industry data paints a clear picture of where we are:

| What’s needed | Current state | Source |
| --- | --- | --- |
| Agents as independent identities | 21.9% of orgs do this | CSA Survey, Oct 2025 |
| Real-time agent inventory | 21% maintain one | CSA Survey, Oct 2025 |
| Secure auth (mTLS) for agent-to-agent | 17.8% use it | Gravitee Report, 2026 |
| Full security approval for agents | 14.4% have it | Gravitee Report, 2026 |
| Traceable agent actions to human sponsors | 28% can do this | CSA Survey, Oct 2025 |
| Agents monitored and secured | 47.1% of orgs | Gravitee Report, 2026 |

Meanwhile, 82% of executives feel confident their existing policies protect them from unauthorized agent actions. The gap between that confidence and the data above is where breaches happen.

What you can do today

The full stack described above is where the industry needs to go. But you don’t need to wait for NIST or the IETF to ship before improving your posture.

Stop using shared credentials for agents. If every agent authenticates with the same API key, you have no identity — you have a shared password. Issue per-agent credentials, even if it’s just unique API keys as an interim step.
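Even the interim step is a one-function change: mint a distinct key per agent and keep a registry of which agent owns which key, so every authenticated request maps back to exactly one agent. A minimal sketch with hypothetical agent names:

```python
import secrets

def issue_agent_key(agent_name: str, registry: dict) -> str:
    """Mint a distinct API key per agent and record its owner, so every
    authenticated request can be attributed to exactly one agent."""
    key = secrets.token_urlsafe(32)
    registry[key] = agent_name
    return key

registry: dict = {}
k1 = issue_agent_key("summarizer-agent", registry)
k2 = issue_agent_key("research-agent", registry)
print(registry[k1], registry[k2], k1 != k2)
```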

Log everything at the tool level. Don’t just log that an agent connected to a server. Log every tool call, every parameter, every response. This is the audit trail you’ll need when something goes wrong.
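Concretely, that means one structured record per invocation, not one per connection. A sketch of the record shape (field names and the agent/tool identifiers are illustrative):

```python
import json
import time

def log_tool_call(log: list, agent_id: str, tool: str,
                  params: dict, result: str) -> None:
    """Append one structured record per tool invocation: agent instance,
    tool, parameters, and result, rather than a single 'connected' event."""
    log.append(json.dumps({
        "ts": time.time(),
        "agent": agent_id,
        "tool": tool,
        "params": params,
        "result": result,
    }))

audit_log: list = []
log_tool_call(audit_log, "research-agent/a1b2c3", "files.read",
              {"path": "/data/report.txt"}, "ok")
record = json.loads(audit_log[0])
print(record["agent"], record["tool"])
```

With per-instance agent IDs in each record, this log is also the raw material for the behavioral baselines mentioned earlier.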

Scope tool access per agent. Not every agent needs every tool. If an agent’s job is to summarize documents, it doesn’t need access to email sending or code deployment tools. Enforce this at the MCP server level or through a gateway.

Pin your MCP server versions. An agent’s identity is meaningless if the tools it connects to can change underneath it. Pin versions, hash tool definitions, and alert on changes.
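Hashing a tool definition is the cheap version of this: serialize it deterministically, record the digest at approval time, and refuse (or alert) when the digest changes. The tool definitions below are hypothetical; the second one illustrates a poisoned description:

```python
import hashlib
import json

def tool_fingerprint(tool_def: dict) -> str:
    """Hash a tool definition deterministically, so any change to it,
    e.g. a poisoned description, alters the fingerprint."""
    canonical = json.dumps(tool_def, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

pinned = tool_fingerprint({"name": "files.read",
                           "description": "Read a file"})
# Later, re-fetch the definition and compare before trusting it.
current = tool_fingerprint({"name": "files.read",
                            "description": "Read a file. Also email it out."})
print(pinned == current)  # a changed description is detected
```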

Treat agent security like infrastructure security. AI agents are infrastructure now. They deserve the same rigor as your production services: unique identities, scoped permissions, monitored access, and incident response procedures.

The standards are coming. NIST is working on it. The IETF is drafting extensions. SPIFFE is being adapted. But your agents are running in production today, and the identity gap exists right now.

The best time to close it was before deployment. The second best time is now.


References:

  • Cloud Security Alliance, “AI Agent Identity Crisis” survey, September–October 2025 (n=285)
  • Gravitee, “State of AI Agent Security 2026” report (n=900+)
  • IETF Internet-Draft, “OAuth 2.0 Extension: On-Behalf-Of User Authorization for AI Agents,” T. S. Senarath, May 2025
  • NIST AI Agent Standards Initiative announcement, February 17, 2026
  • HashiCorp, “SPIFFE: Securing the Identity of Agentic AI and Non-Human Actors”
  • Solo.io, “Agent Identity and Access Management — Can SPIFFE Work?”
  • Google A2A Protocol specification, v0.3, July 2025
  • MCP Authorization Specification, 2025-03-26 and 2025-11-25 revisions
