U.S. and allied cybersecurity agencies published new security guidance on accessing, monitoring, and managing the risks of agentic AI systems.
The Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), the Australian Cyber Security Centre (ACSC), and other international partners, released joint guidance detailing how organizations should secure current deployments and defend against future threats as agentic systems become more capable and widely used.
Best Practices For Securing Agentic AI Systems
Agentic AI models are more autonomous, work more freely in and across systems, and evolve faster, making them particularly vulnerable to cyber risks. Managing these risks requires addressing the design, development, deployment, and operation of agentic AI systems. The guidance recommends the following best practices:
Limit access and actions. This includes enforcing least-privilege access, where agents have only the minimal permissions required to complete specific tasks. Broad or persistent access to systems, data, or tools is discouraged due to the risk of misuse or compromise.
Execute strong identity and authentication controls. Clearly define agent identities, enforce strict credential management, and ensure that access to external tools or internal systems is verified and logged.
Monitor and track agents. Monitor and log agent actions, inputs, outputs, and system interactions in detail. Continuously review logs to detect anomalies, unintended behavior, or signs of compromise.
Intentionally limit agent capabilities. This includes limiting the scope of tasks agents can perform, restricting their ability to execute high-impact actions, and, where possible, isolating them from critical systems.
Ensure human oversight over critical systems and data. Humans should be involved in approving and/or supervising sensitive actions, particularly those affecting critical infrastructure, financial systems, or regulated data.
Implement secure integration with external tools and services. Agentic systems often rely on APIs or third-party platforms, which introduces additional risk. Organizations should validate these connections, enforce access controls, and monitor interactions closely.
Defending Against Future Risks
The guidance recommends three actions organizations need to take to adapt to evolving agentic AI threats: greater coordination among parties that use and are affected by agentic AI, focused agentic-specific testing, and cross-platform monitoring and adaptation.
Better coordination. The guidance recommends stronger coordination between security practitioners, researchers, major AI developers, and government organizations. The goal is to compile and maintain threat information on agentic AI systems, track malicious techniques, and improve shared threat prevention models.
More agent-specific testing. Existing evaluation methods may not reflect real-world deployment conditions, leaving security weaknesses undetected. Agencies recommend developing stronger evaluation methods, creating benchmark datasets for realistic use cases, using test results to identify failure points, and sharing findings across the field as per above.
Broader system-level analysis. Because agentic AI systems combine models, humans, tools, guardrails, datasets, and hardware, risks may arise from how these parts interact, not just from a single weak component. The guidance recommends using system-theoretic methods, including System Theoretic Process Analysis and Causal Analysis using System Theory, to identify security issues, assess mission risk, investigate incidents, and improve risk management across entire systems.

