The standard sets baseline cybersecurity requirements for AI models and systems and applies to developers, operators, and data custodians across the AI lifecycle.
The European Telecommunications Standards Institute (ETSI) has published ETSI EN 304 223 V2.1.1, a European Standard that defines baseline cybersecurity requirements for artificial intelligence (AI) models and systems. Adopted as a European Standard (EN), it addresses security across the AI lifecycle, including design, development, deployment, maintenance, and end-of-life.
ETSI EN 304 223 V2.1.1, titled Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems, replaces earlier technical specifications developed by ETSI’s Technical Committee on Securing Artificial Intelligence (SAI). The document defines high-level principles and provisions that stakeholders, including AI developers, system operators, and data custodians, are expected to follow to protect AI systems from cybersecurity threats.
The standard applies to AI systems that incorporate deep neural networks and other machine learning technologies, and it sets out requirements intended to safeguard AI models against risks such as data poisoning and adversarial attacks. It outlines baseline controls for secure design, secure development practices, secure deployment, ongoing maintenance, and secure retirement of AI systems.
“This European Standard provides a clear set of baseline cybersecurity requirements to help protect AI systems,” ETSI stated in its release announcing the publication of ETSI EN 304 223.
ETSI's publication follows a multi-stage standardisation process, and the latest version is now formally available in the ETSI deliverables repository. The standard's adoption as an EN means it will be transposed by national standardisation bodies participating in the European standardisation system, with dates for national adoption and for withdrawal of conflicting national standards noted in the document.