By using this site, you agree to the Privacy Policy and Terms of Use.
Accept
World of SoftwareWorld of SoftwareWorld of Software
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Search
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
Reading: ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts
Share
Sign In
Notification Show More
Font ResizerAa
World of SoftwareWorld of Software
Font ResizerAa
  • Software
  • Mobile
  • Computing
  • Gadget
  • Gaming
  • Videos
Search
  • News
  • Software
  • Mobile
  • Computing
  • Gaming
  • Videos
  • More
    • Gadget
    • Web Stories
    • Trending
    • Press Release
Have an existing account? Sign In
Follow US
  • Privacy
  • Terms
  • Advertise
  • Contact
Copyright © All Rights Reserved. World of Software.
World of Software > Computing > ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts
Computing

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

News Room
Last updated: 2025/11/19 at 5:57 AM
News Room Published 19 November 2025
Share
ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts
SHARE

Nov 19, 2025Ravie LakshmananAI Security / SaaS Security

Malicious actors can exploit default configurations in ServiceNow’s Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks.

The second-order prompt injection, according to AppOmni, makes use of Now Assist’s agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive corporate data, modify records, and escalate privileges.

“This discovery is alarming because it isn’t a bug in the AI; it’s expected behavior as defined by certain default configuration options,” said Aaron Costello, chief of SaaS Security Research at AppOmni.

“When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems. These settings are easy to overlook.”

DFIR Retainer Services

The attack is made possible because of agent discovery and agent-to-agent collaboration capabilities within ServiceNow’s Now Assist. With Now Assist offering the ability to automate functions such as help-desk operations, the scenario opens the door to possible security risks.

For instance, a benign agent can parse specially crafted prompts embedded into content it’s allowed access to and recruit a more potent agent to read or change records, copy sensitive data, or send emails, even when built-in prompt injection protections are enabled.

The most significant aspect of this attack is that the actions unfold behind the scenes, unbeknownst to the victim organization. At its core, the cross-agent communication is enabled by controllable configuration settings, including the default LLM to use, tool setup options, and channel-specific defaults where the agents are deployed –

  • The underlying large language model (LLM) must support agent discovery (both Azure OpenAI LLM and Now LLM, which is the default choice, support the feature)
  • Now Assist agents are automatically grouped into the same team by default to invoke each other
  • An agent is marked as being discoverable by default when published

While these defaults can be useful to facilitate communication between agents, the architecture can be susceptible to prompt injections when an agent whose main task is to read data that’s not inserted by the user invoking the agent.

“Through second-order prompt injection, an attacker can redirect a benign task assigned to an innocuous agent into something far more harmful by employing the utility and functionality of other agents on its team,” AppOmni said.

CIS Build Kits

“Critically, Now Assist agents run with the privilege of the user who started the interaction unless otherwise configured, and not the privilege of the user who created the malicious prompt and inserted it into a field.”

Following responsible disclosure, ServiceNow said the behavior is intended to be this way, but the company has since updated its documentation to provide more clarity on the matter. The findings demonstrate the need for strengthening AI agent protection, as enterprises increasingly incorporate AI capabilities into their workflows.

To mitigate such prompt injection threats, it’s advised to configure supervised execution mode for privileged agents, disable the autonomous override property (“sn_aia.enable_usecase_tool_execution_mode_override”), segment agent duties by team, and monitor AI agents for suspicious behavior.

“If organizations using Now Assist’s AI agents aren’t closely examining their configurations, they’re likely already at risk,” Costello added.

Sign Up For Daily Newsletter

Be keep up! Get the latest breaking news delivered straight to your inbox.
By signing up, you agree to our Terms of Use and acknowledge the data practices in our Privacy Policy. You may unsubscribe at any time.
Share This Article
Facebook Twitter Email Print
Share
What do you think?
Love0
Sad0
Happy0
Sleepy0
Angry0
Dead0
Wink0
Previous Article Gemini 3 is Google’s most advanced AI model yet Gemini 3 is Google’s most advanced AI model yet
Next Article We Pit These Two Weird-Looking Android Gaming Phones Head to Head We Pit These Two Weird-Looking Android Gaming Phones Head to Head
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Stay Connected

248.1k Like
69.1k Follow
134k Pin
54.3k Follow

Latest News

No Time to Read a Long Google Doc? Try Gemini’s Quick AI Audio Summaries
No Time to Read a Long Google Doc? Try Gemini’s Quick AI Audio Summaries
News
Top Strategies for Scaling Professional Teams in 2026
Top Strategies for Scaling Professional Teams in 2026
Gadget
Stripe’s x402 Turned Bitcoin’s Micropayments Dream Into a Bot Economy | HackerNoon
Stripe’s x402 Turned Bitcoin’s Micropayments Dream Into a Bot Economy | HackerNoon
Computing
Sky adding HBO Max is great, but I do wish it made 4K the standard
Sky adding HBO Max is great, but I do wish it made 4K the standard
Gadget

You Might also Like

Stripe’s x402 Turned Bitcoin’s Micropayments Dream Into a Bot Economy | HackerNoon
Computing

Stripe’s x402 Turned Bitcoin’s Micropayments Dream Into a Bot Economy | HackerNoon

24 Min Read
The HackerNoon Newsletter: AI Exposes the Fragility of Good Enough Data Operations (2/15/2026) | HackerNoon
Computing

The HackerNoon Newsletter: AI Exposes the Fragility of Good Enough Data Operations (2/15/2026) | HackerNoon

3 Min Read
Week in Review: Most popular stories on GeekWire for the week of Feb. 8, 2026
Computing

Week in Review: Most popular stories on GeekWire for the week of Feb. 8, 2026

3 Min Read
Pinterest Trends Strategy: How to Use Search Momentum to Drive Traffic
Computing

Pinterest Trends Strategy: How to Use Search Momentum to Drive Traffic

3 Min Read
//

World of Software is your one-stop website for the latest tech news and updates, follow us now to get the news that matters to you.

Quick Link

  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact

Topics

  • Computing
  • Software
  • Press Release
  • Trending

Sign Up for Our Newsletter

Subscribe to our newsletter to get our newest articles instantly!

World of SoftwareWorld of Software
Follow US
Copyright © All Rights Reserved. World of Software.
Welcome Back!

Sign in to your account

Lost your password?