ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts
ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts
19 November 2025
Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks.
The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive