Meta Refuses to Sign EU’s New AI Code, Sparking Debate Over the Future of AI in Europe

Meta AI isn’t available yet in your country (Source: Getty Images)

Meta, the parent company of Facebook and Instagram, said on Friday, July 18, 2025, that it will not sign the European Union’s new artificial intelligence (AI) code of practice. The announcement came just days after the European Commission introduced its final version of rules for companies developing general-purpose AI, such as Meta’s own Llama model and competitors like OpenAI’s ChatGPT.

Joel Kaplan, Meta’s Chief Global Affairs Officer, explained the decision in a LinkedIn post on Friday. He said Europe “is heading down the wrong path on AI.” According to Kaplan, the new code creates too many legal uncertainties for AI developers. He also said the rules go far beyond what is required under the existing AI Act, which was passed by European lawmakers in 2024.

The European Commission’s AI code of practice is voluntary, not mandatory. By signing it, companies can show they are following the right steps to comply with the AI Act. This gives them some legal certainty and may reduce how often regulators check their businesses. However, companies that do not sign, like Meta, might face stricter controls in the future.

The code requires companies to keep documentation explaining how their AI systems work and to update those documents regularly. They may not use pirated content to train their AI systems, and if a content creator asks to be excluded from AI training data, the company must honor that request. Companies must also carry out risk assessments and monitor their systems for safety after launch. These rules take effect on August 2, 2025, and apply to general-purpose AI models, meaning systems that can be used for many different tasks rather than a single purpose.

Meta’s refusal comes amid wider debate in Europe about the new AI rules. Earlier this month, more than 40 major European businesses, including Bosch, Siemens, SAP, and Airbus, signed a letter asking the Commission to “Stop the Clock” and pause the new rules. Like Meta, they are concerned that the regulations are unclear and too strict, and they believe the rules could slow the development and roll-out of new AI technology in Europe, putting local companies at a disadvantage compared with those in the US or China.

Despite Meta’s criticism, other big technology companies are choosing a different path. Microsoft said it will likely sign the European code after reviewing the requirements in detail. OpenAI, the company behind ChatGPT, has confirmed its plan to support the code as well. Under the new rules, OpenAI and other developers, like Mistral, will need to share information on the data used to train their AI models and explain how they comply with EU copyright laws.

The AI Act sorts uses of AI into risk categories, including “unacceptable risk” (practices such as social scoring or manipulation) and “high risk” (areas such as biometric identification or AI in hiring decisions). Developers must register high-risk AI systems in a dedicated EU database and follow additional rules to ensure quality and manage risk.

Kaplan’s note about not signing the code echoed the views of many other companies that believe the new rules are too broad and could hold back not only technology giants but also smaller European firms. He stated that over-regulation might “throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them”.

In summary, on July 18, 2025, Meta publicly refused to sign the EU’s new AI code of practice. This decision highlights the growing divide between some US tech firms and EU policymakers over how to regulate artificial intelligence, and what rules will help or hurt innovation in Europe.
