Tech giants split over EU AI code as compliance deadlines approach

The implementation of the EU’s general-purpose AI code of practice is revealing deep divisions among major technology companies. Microsoft has signalled that it intends to sign the European Union’s voluntary AI compliance framework, while Meta has refused, calling the guidelines an overreach that will curb innovation.

Microsoft President Brad Smith told Reuters, “I think it’s likely we will sign. We need to read the documents.” Smith highlighted his company’s collaborative approach, saying, “Our goal is to find ways to be collaborative, and at the same time, one of the things we welcome is direct engagement by the AI Office with the industry.”

In contrast, Joel Kaplan, Meta’s chief global affairs officer, announced on LinkedIn that “Meta won’t be signing it. This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

Kaplan argued that “Europe is heading down the wrong path on AI,” and warned that the EU AI code would “throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.”

Early adopters and holdouts

The technology sector’s fractured response highlights a range of strategies for managing regulatory compliance in Europe. OpenAI and Mistral have signed the code, positioning themselves as early adopters of the voluntary framework.

OpenAI announced its commitment, saying, “Signing the code reflects our commitment to providing capable, accessible and secure AI models for Europeans to fully participate in the economic and societal benefits of the Intelligence Age.”

According to industry observers tracking voluntary commitments, OpenAI will participate in the EU Code of Practice for general-purpose AI models.

Not all companies are as enthusiastic: more than 40 European companies signed a letter earlier this month asking the Commission to pause implementation of the AI Act.

Code requirements and timeline

The European Commission published the Code of Practice on July 10, aiming to provide legal certainty for companies developing general-purpose AI models ahead of mandatory obligations that take effect on August 2, 2025.

This voluntary tool was developed by 13 independent experts with input from over 1,000 stakeholders, including model providers, small businesses, academics, AI safety experts, rights holders and civil society organizations.

The EU AI code establishes requirements in three areas. Transparency obligations require providers to maintain technical documentation of their models and datasets, while copyright compliance requires a clear internal policy outlining how training data is acquired and used in accordance with EU copyright rules.

For cutting-edge models, safety and security obligations apply to the category of “GPAI with systemic risk” (GPAISR), which covers the most advanced models such as OpenAI’s o3, Anthropic’s Claude 4 Opus, and Google’s Gemini 2.5 Pro.

Signatories must publish a summary of the content used to train their general-purpose AI models and implement policies to comply with EU copyright law. The framework requires companies to document training data sources, carry out robust risk assessments, and establish governance structures for managing potential threats from AI systems.
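To make the documentation obligation concrete, the sketch below shows roughly what a provider’s internal training-data record might look like. It is illustrative only: the field names and structure are hypothetical and do not reproduce the Commission’s official templates.

```python
# Illustrative sketch only: field names are hypothetical, not the
# Commission's official documentation template.
from dataclasses import dataclass, field


@dataclass
class TrainingDataRecord:
    """One entry in a provider's internal training-data documentation."""
    source_name: str           # e.g. a licensed corpus or public dataset
    acquisition_method: str    # "licensed", "publicly available", etc.
    copyright_policy_ref: str  # pointer to the relevant internal copyright policy section
    opt_out_respected: bool    # whether machine-readable opt-outs were honoured


@dataclass
class ModelDocumentation:
    """Minimal documentation bundle a signatory might maintain per model."""
    model_name: str
    training_data: list[TrainingDataRecord] = field(default_factory=list)
    risk_assessment_done: bool = False

    def missing_items(self) -> list[str]:
        """Return obviously incomplete documentation items."""
        gaps = []
        if not self.training_data:
            gaps.append("no training-data sources documented")
        if not self.risk_assessment_done:
            gaps.append("risk assessment not recorded")
        return gaps
```

In practice a provider would map records like these onto whatever summary format the AI Office ultimately prescribes; the point of the sketch is simply that sources, acquisition methods and copyright handling all need to be traceable.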

Enforcement and penalties

Penalties for non-compliance are significant: up to 35 million euros or 7% of global annual turnover, whichever is higher. For providers of GPAI models specifically, the Commission can impose fines of up to 15 million euros or 3% of worldwide annual turnover.
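As a rough arithmetic illustration of how a “whichever is higher” cap works, consider a hypothetical provider with 2 billion euros in global annual turnover (the turnover figure is invented for the example):

```python
def fine_cap(annual_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of a fine: the fixed amount or a share of turnover,
    whichever is higher (illustrative reading of the published caps)."""
    return max(fixed_cap_eur, annual_turnover_eur * pct)


# Hypothetical provider with EUR 2 billion global annual turnover:
print(fine_cap(2_000_000_000, 35_000_000, 0.07))  # 140000000.0 -> the 7% figure dominates
print(fine_cap(2_000_000_000, 15_000_000, 0.03))  # 60000000.0 for the GPAI-specific cap
```

For large providers the percentage-based ceiling is the binding one; the fixed amounts matter mainly for smaller companies.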

The Commission has indicated that if a provider adheres to the approved code of practice, the AI Office and national regulators will treat it as a simplified compliance path, focusing on whether code commitments are met rather than auditing every AI system. This creates an incentive for early adoption among companies seeking regulatory predictability.

The EU AI code forms part of the broader AI Act framework. Under the AI Act, obligations for GPAI models, detailed in Articles 50 to 55, become enforceable 12 months after the act entered into force, on August 2, 2025. Providers of GPAI models already on the market before that date must comply with the AI Act by August 2, 2027.

Industry and global impact

The varied responses suggest that technology companies are adopting fundamentally different strategies for managing regulatory relationships in global markets. Microsoft’s cooperative stance contrasts with Meta’s confrontational approach, and the outcome could set a precedent for how major AI developers engage with international regulation.

Despite mounting opposition, the Commission has refused to delay. Thierry Breton, the EU’s internal market commissioner, insisted the framework would proceed on schedule, arguing that the AI Act is crucial for consumer safety and for trust in emerging technologies.

The EU AI code’s voluntary nature at this early stage gives businesses an opportunity to influence regulatory development through participation. However, enforcement beginning in August 2025 will ultimately require compliance regardless of whether companies adopt the voluntary code.

For businesses operating across multiple jurisdictions, the EU framework could shape global AI governance standards. It aligns with broader international developments, including the G7 Hiroshima AI Process and various national AI strategies, and could establish the European approach as an international benchmark.

Looking ahead

In the immediate term, the content of the code will be reviewed by EU authorities. The European Commission and Member States are assessing the adequacy of the code and are expected to formally endorse it by August 2, 2025.

The regulatory framework has implications for AI development, as companies must balance innovation goals with compliance obligations across multiple jurisdictions. The divergent company responses to the voluntary code foreshadow potential compliance challenges once the mandatory requirements take effect.

See also: Navigating the EU AI Act: Impact on UK businesses

Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo in Amsterdam, California and London. The comprehensive event will be held in collaboration with other major events, including the Intelligent Automation Conference, Blockx, Digital Transformation Week, and Cyber Security & Cloud Expo.

Check out other upcoming Enterprise Technology events and webinars with TechForge here.
