CISA, Global Cyber Agencies Publish Guidance To Secure AI Use In Critical Infrastructure
The Cybersecurity and Infrastructure Security Agency (CISA) and international partners outline four core principles for safely integrating AI into operational technology systems across critical infrastructure.
Key takeaways
- Critical-infrastructure operators are encouraged to treat AI as a high-risk technology in operational technology (OT) environments and to implement robust safety, security, and oversight measures.
- The guidance identifies four foundational principles for safe AI integration in OT: understand AI, consider AI use in the OT domain, establish AI governance and assurance frameworks, and embed safety and security practices into AI-enabled OT systems.
- The guidance emphasizes that AI deployed in systems that control physical infrastructure must operate with human-in-the-loop (HITL) oversight, fail-safe mechanisms, continuous monitoring, and incident-response planning, rather than as unattended automation.
CISA, along with the NSA and international cybersecurity agencies, released a joint guidance document, “Principles for the Secure Integration of Artificial Intelligence in Operational Technology.” The guidance is aimed at owners and operators of critical infrastructure systems that incorporate operational technology.
The document outlines four core principles designed to help institutions benefit from AI, such as machine-learning tools or AI agents, while safeguarding the safety, reliability, and security of OT environments. The four principles are:
- Understand AI: encourages OT operators to familiarize themselves with the unique risks AI poses in industrial and control-system contexts. This includes model drift, reliability issues, increased complexity, and potential safety hazards. It also calls for training personnel and adopting secure-by-design AI development lifecycles.
- Consider AI Use in the OT Domain: recommends a careful assessment of whether AI is truly the right tool for a given OT task. This includes evaluating business cases, vendor transparency, data security, and the long-term maintenance burden that AI integration might bring.
- Establish AI Governance and Assurance Frameworks: calls for organizations to build frameworks that include continuous testing, compliance checks, and regulation-aligned evaluation of AI systems before and after deployment.
- Embed Safety and Security Practices into AI-Enabled OT Systems: instructs operators to integrate HITL controls for decision-making, implement fail-safe mechanisms, monitor and log AI activity, and update incident-response plans to handle potential AI failures or malicious attacks.
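The fourth principle's controls can be illustrated in a short sketch. The bounds, confidence cutoff, and setpoint names below are hypothetical stand-ins (the guidance does not prescribe specific values or APIs); the point is the pattern: an AI recommendation is bounds-checked, logged, and gated behind an operator decision, with a known-safe fallback when any check fails.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-ot-gate")

# Hypothetical safe bounds for a pump setpoint; in practice these come
# from process engineering, never from the AI model itself.
SAFE_MIN, SAFE_MAX = 10.0, 90.0
FAILSAFE_SETPOINT = 50.0  # known-safe fallback value

@dataclass
class AIRecommendation:
    setpoint: float
    confidence: float

def apply_with_oversight(rec, operator_approves):
    """Gate an AI-recommended setpoint behind fail-safe checks and HITL approval.

    operator_approves: callable returning True/False -- a stand-in for a
    real operator console. Returns the setpoint actually applied.
    """
    # Monitoring/logging: every AI recommendation is recorded.
    log.info("AI recommended %.1f (confidence %.2f)", rec.setpoint, rec.confidence)

    # Fail-safe: out-of-bounds or low-confidence outputs never reach the process.
    if not (SAFE_MIN <= rec.setpoint <= SAFE_MAX) or rec.confidence < 0.8:
        log.warning("Recommendation rejected; reverting to fail-safe %.1f", FAILSAFE_SETPOINT)
        return FAILSAFE_SETPOINT

    # Human-in-the-loop: an operator must confirm before the action is applied.
    if not operator_approves(rec):
        log.info("Operator declined; holding fail-safe %.1f", FAILSAFE_SETPOINT)
        return FAILSAFE_SETPOINT

    log.info("Operator approved; applying %.1f", rec.setpoint)
    return rec.setpoint
```

A real deployment would route the approval step to an HMI and feed the logs into the incident-response process the guidance calls for; the structure, not the values, is what the principle prescribes.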
The guidance applies broadly to OT systems across critical infrastructure, including power generation, water treatment, transportation, manufacturing, and other sectors that use AI for tasks such as predictive maintenance, anomaly detection, and operational optimization. It notes that AI methods (e.g., machine learning, LLM-based agents, statistical models) vary in complexity and risk, but that security and safety requirements are especially important whenever AI controls physical processes.
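Anomaly detection, one of the OT tasks named above, is often among the simpler statistical methods the guidance mentions. As a minimal sketch (the window size, threshold, and signal are illustrative assumptions, not from the guidance), a rolling z-score flags sensor readings that deviate sharply from recent history:

```python
import statistics

def detect_anomalies(readings, window=20, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations from the rolling mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Steady pressure-like signal with one injected spike at index 25.
signal = [100.0 + 0.1 * (i % 5) for i in range(40)]
signal[25] = 140.0
print(detect_anomalies(signal))  # → [25]
```

Even a simple detector like this carries the risks the guidance flags for OT: drift in the underlying process shifts the rolling baseline, so thresholds need the same governance and periodic re-evaluation as more complex models.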
By issuing this guidance, CISA and its partners signal growing caution among global cybersecurity authorities. While AI offers promising gains in efficiency and performance, its integration into OT requires rigorous governance, transparency, and risk awareness. Otherwise, critical infrastructure could become vulnerable to failures or cyberattacks.
As AI adoption in critical infrastructure accelerates, operators and regulators worldwide will be watching closely to see whether organizations adhere to the new principles and whether this guidance becomes a baseline for future regulatory requirements.