Meta has refused to sign the European Commission’s new AI Code of Practice, escalating tensions with Brussels just weeks before landmark regulation kicks in.
The voluntary guidelines – designed to help tech giants prepare for the EU’s sweeping AI Act – were rejected outright by Meta’s global affairs chief Joel Kaplan, who warned they introduced “legal uncertainties” and “go far beyond the scope of the AI Act”.
“Europe is heading down the wrong path on AI,” Kaplan wrote on LinkedIn. “We share concerns raised by other businesses that this overreach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.”
Silicon Valley vs Brussels
The Commission’s code – which includes guidance on transparency, safety, and copyright – has been signed by Meta rivals OpenAI and Mistral, and Microsoft is reportedly preparing to follow suit.
But Meta’s refusal reflects deeper frustration in Silicon Valley, where concerns are mounting that EU regulation is evolving faster than many companies can keep up with.
Over 110 firms including Airbus, ASML, and Siemens recently signed a letter to European Commission president Ursula von der Leyen calling for a two-year “clock stop” on enforcement of the AI Act.
The European Commission says the code offers a “stepping stone” to compliance, particularly for developers of general-purpose AI like Meta’s Llama models.
But critics argue it adds complexity and introduces new risks, particularly around copyright obligations and dataset disclosures.
Kaplan warned it could “throttle” innovation in Europe – especially for smaller firms without Meta’s compliance resources.
Behind Meta’s AI playbook
The rejection comes as Meta doubles down on its own AI ambitions.
It is aggressively hiring top talent in a bid to challenge OpenAI and Google DeepMind, reportedly offering signing bonuses of up to $100m and pursuing a multi-billion-dollar deal with AI startup Scale AI.
The firm has earmarked $65bn for AI infrastructure this year, and insiders say it’s shifting from research to deployment – building tools ready to roll out across its platforms, including WhatsApp and Instagram.
While Mark Zuckerberg has publicly championed “open and accessible” AI, critics say Meta’s open-source strategy has been more calculated.
Llama, its flagship large language model, was released under a restrictive licence that allowed use – but not competition – drawing accusations of “open source washing”.
Now, as Meta begins to commercialise Llama and dials down its openness, the decision to reject the EU’s code could be part of a broader push to shape AI governance on its own terms.
What it means for UK tech
For UK startups, the transatlantic regulatory split matters. Compliance costs and legal uncertainty could deter global AI firms from expanding in Europe – including in the UK, whose own regime is set to diverge further from the EU’s post-Brexit.
With the Labour government pledging to make Britain “the best place in the world to build an AI business”, pressure is mounting to strike a balance between safety and competitiveness.
As it pivots from experimentation to monetisation, the US tech giant is drawing a line in the sand – and daring regulators to cross it.