The ETSI EN 304 223 standard introduces baseline security requirements for AI that companies must integrate into their governance frameworks.
As organizations incorporate machine learning into their core operations, this European Standard (EN) establishes specific provisions to protect AI models and systems. It is the first globally applicable European standard for AI cybersecurity, and its formal approval by national standards bodies gives it standing across international markets.
The standard is set to serve as a key benchmark alongside EU AI legislation. It addresses the reality that AI systems carry risks that traditional software security measures often overlook, such as data poisoning, model obfuscation, and indirect prompt injection. Its scope covers everything from deep neural networks and generative AI to basic predictive systems, explicitly excluding only systems used solely for academic research.
ETSI standard clarifies chain of responsibility for AI security
A persistent hurdle in enterprise AI adoption is determining who owns the risk. The ETSI standard addresses this by defining three principal technical roles: developer, system operator, and data custodian.
For many companies, these lines are blurred. A financial services firm that fine-tunes open-source models for fraud detection counts as both a developer and a system operator. This dual status brings strict obligations: the company must secure its deployment infrastructure while also documenting training data provenance and auditing model design.
Including “data custodians” as a distinct stakeholder group has direct implications for chief data and analytics officers (CDAOs). These custodians control data permissions and integrity, and the role now carries explicit security responsibilities: custodians must ensure that a system’s intended use matches the sensitivity of its training data, effectively placing security gatekeepers within the data management workflow.
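To make that gatekeeping role concrete, here is a minimal sketch of what such a check might look like in practice. The sensitivity tiers, use-case labels, and policy table are illustrative assumptions for the example, not terms defined by the standard.

```python
# Hypothetical data-custodian gate: block deployments whose intended use
# exceeds what the training data's sensitivity tier permits.
# The tiers and policy table below are illustrative, not ETSI-defined.

SENSITIVITY_ALLOWED_USES = {
    "public":       {"internal_analytics", "customer_facing", "external_api"},
    "internal":     {"internal_analytics", "customer_facing"},
    "confidential": {"internal_analytics"},
}

def approve_deployment(data_sensitivity: str, intended_use: str) -> bool:
    """Return True only if the intended use is permitted for the data tier."""
    allowed = SENSITIVITY_ALLOWED_USES.get(data_sensitivity, set())
    return intended_use in allowed

assert approve_deployment("internal", "customer_facing")
assert not approve_deployment("confidential", "external_api")
```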
ETSI’s AI standard makes it clear that security cannot be bolted on as an afterthought at deployment. During the design phase, organizations should conduct threat modeling that addresses AI-native attacks such as membership inference and model obfuscation.
One clause requires developers to limit functionality to reduce the attack surface. For example, if a system uses a multimodal model but only needs text processing, the unused modalities (such as image or audio handling) represent attack surface that should be disabled rather than left exposed. This requirement forces technology leaders to rethink the common practice of deploying large, general-purpose foundation models where smaller, specialized models would suffice.
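By way of illustration, the sketch below gates a hypothetical multimodal endpoint so that only text inputs ever reach the model. The request shape and the modality field names are assumptions made for the example; the standard does not prescribe a mechanism.

```python
# Hypothetical ingress filter: the underlying model is multimodal, but this
# deployment only needs text, so every other modality is rejected outright
# rather than left reachable as unused attack surface.

ENABLED_MODALITIES = {"text"}  # deliberately excludes "image" and "audio"

def filter_request(request: dict) -> dict:
    """Reject any input parts in modalities this deployment has disabled."""
    rejected = [p for p in request.get("parts", [])
                if p.get("modality") not in ENABLED_MODALITIES]
    if rejected:
        raise ValueError("Disabled modalities in request: "
                         f"{sorted({p['modality'] for p in rejected})}")
    return request

# A text-only request passes; an image part would be refused before inference.
filter_request({"parts": [{"modality": "text", "content": "classify this"}]})
```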
The document also calls for stricter asset management. Developers and system operators must maintain a comprehensive inventory of assets, including their interdependencies and connectivity. This supports the detection of shadow AI: IT leaders can’t secure models they don’t know exist. The standard also mandates disaster recovery plans tailored specifically to AI attacks, ensuring that a “known good state” can be restored even if a model is compromised.
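The standard does not prescribe an inventory schema, but a minimal record capturing interdependencies and connectivity might look like the following sketch; the field names and example entries are illustrative assumptions.

```python
# Illustrative AI asset record: enough structure to answer "what models do we
# run, what do they depend on, and what can reach them?" during an audit.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str                      # e.g. "fraud-detector-v3" (hypothetical)
    asset_type: str                # "model", "dataset", "pipeline", ...
    owner: str                     # accountable team or role
    dependencies: list[str] = field(default_factory=list)   # upstream assets
    connectivity: list[str] = field(default_factory=list)   # exposed interfaces

inventory = [
    AIAsset("fraud-detector-v3", "model", "risk-engineering",
            dependencies=["txn-dataset-2025Q1", "base-llm-7b"],
            connectivity=["internal REST API", "batch scoring queue"]),
]

# Shadow-AI check: anything serving traffic should appear in this inventory.
```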
Supply chain security is a pressing friction point for companies that rely on third-party vendors and open source repositories. The ETSI standard requires that if a system operator chooses to use an AI model or component that is not well-documented, it must justify that decision and document the associated security risks.
In practice, procurement teams can no longer accept “black box” solutions. Developers must provide cryptographic hashes so the authenticity of model components can be verified. If training data is obtained from public sources (a common practice for large language models), developers must document the source URL and retrieval timestamp. This audit trail is essential for post-incident investigations, particularly when determining whether a model was affected by data poisoning during training.
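A minimal sketch of that verification and provenance trail, assuming the vendor publishes a SHA-256 digest alongside each artifact; the file paths, example URL, and record layout are illustrative, not specified by the standard.

```python
# Verify a downloaded model artifact against a vendor-published SHA-256
# digest, and record where and when publicly sourced data was retrieved.
import hashlib
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, published_digest: str) -> None:
    actual = sha256_of(path)
    if actual != published_digest.lower():
        raise RuntimeError(f"Hash mismatch for {path}: got {actual}")

# Provenance entry for publicly obtained training data (illustrative layout).
provenance_entry = {
    "source_url": "https://example.org/corpus.tar.gz",   # placeholder URL
    "retrieved_at": datetime.now(timezone.utc).isoformat(),
    "sha256": "<digest recorded at retrieval time>",      # placeholder value
}
```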
When companies expose APIs to external customers, they must apply controls designed to mitigate AI-focused attacks, such as rate limiting, which makes it harder for attackers to reverse-engineer a model through high-volume queries or to inject harmful data at scale.
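One common way to implement such a control is a token bucket per API key. The sketch below is a generic in-memory version under that assumption; the rate and burst values are arbitrary, and the standard does not mandate any particular mechanism.

```python
# Minimal token-bucket rate limiter: slows the high-volume querying that
# model-extraction attacks depend on. In-memory and single-process; a real
# deployment would back this with shared state such as a central store.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.capacity = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets: dict[str, TokenBucket] = {}

def check(api_key: str) -> bool:
    bucket = buckets.setdefault(api_key, TokenBucket(rate_per_sec=5, burst=20))
    return bucket.allow()  # deny the request (e.g. HTTP 429) when False
```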
The lifecycle approach extends to the maintenance phase, where the standard treats a major update, such as retraining on new data, as the deployment of a new version. Under the ETSI AI standard, this triggers fresh security testing and assessment requirements.
Continuous monitoring is also formalized. Beyond uptime, system operators must analyze logs to detect “data drift”, gradual changes in inputs or behavior that could indicate a security compromise. This moves AI monitoring from a performance metric to a security discipline.
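As a sketch of what log-based drift detection can look like, the snippet below computes the population stability index (PSI) between a training-time baseline and recent production inputs for a single feature. PSI is one common drift measure among many; the 0.2 alert threshold is a widely used rule of thumb, and the synthetic data is purely illustrative.

```python
# Population Stability Index between a baseline feature distribution and
# recent production traffic; a rising PSI flags drift worth investigating
# as a possible security event, not just a performance issue.
import numpy as np

def psi(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
    b = np.histogram(baseline, edges)[0] / len(baseline)
    r = np.histogram(recent, edges)[0] / len(recent)
    b, r = np.clip(b, 1e-6, None), np.clip(r, 1e-6, None)  # avoid log(0)
    return float(np.sum((r - b) * np.log(r / b)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)           # training-time snapshot
recent = rng.normal(0.6, 1.0, 5_000)              # shifted production inputs
if psi(baseline, recent) > 0.2:                   # common rule-of-thumb alert
    print("drift alert: investigate inputs and model behaviour")
```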
The standard also addresses the “end of life” phase. When models are retired or transferred, organizations must engage data custodians to securely dispose of data and configuration details. This provision prevents sensitive intellectual property and training data from leaking through discarded hardware or forgotten cloud instances.
Management oversight and governance
Compliance with ETSI EN 304 223 requires a review of existing cybersecurity training programs. The standard requires role-specific training: developers need to understand secure AI coding practices, while general staff need to recognize threats such as social engineering driven by AI-generated content.
“ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for protecting AI systems,” said Scott Cadzow, Chair of ETSI’s technical committee for Securing Artificial Intelligence (TC SAI).
“At a time when AI is increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be overstated. The work that has gone into delivering this framework is the result of extensive collaboration and means that organizations can have full confidence in AI systems that are resilient, reliable and secure by design.”
Embedding the baselines from ETSI’s AI security standard provides a structure for more secure innovation. With documented audit trails, clear role definitions, and greater supply chain transparency, companies can reduce the risks of AI deployments while establishing a defensible position for future regulatory audits.
An upcoming technical report (ETSI TR 104 159) will apply these principles specifically to generative AI, targeting issues such as deepfakes and disinformation.


