Artificial Intelligence

Elon Musk’s xAI Commits to EU AI Code’s Safety Chapter, Skips Other Provisions

Elon Musk’s xAI commits to the EU AI Code of Practice’s safety and security chapter, signaling support for AI risk measures while opting out of transparency and copyright mandates ahead of the AI Act

Summary
  • xAI to sign only safety and security chapter of EU AI Code

  • Distances itself from transparency and copyright mandates

  • Voluntary compliance provides legal clarity under upcoming EU AI Act

  • Other tech giants vary: Google to sign all chapters, Meta to sign none

Elon Musk’s AI start‑up xAI announced on Thursday, July 31, 2025, that it will sign the European Union’s voluntary Code of Practice on artificial intelligence, but only the chapter covering safety and security, Reuters reported.

The decision signals xAI’s willingness to align with the bloc’s emerging regulatory framework while distancing itself from transparency and copyright rules it deems harmful.

The EU’s Code of Practice, drafted by 13 independent experts, comprises three pillars: transparency, copyright, and safety and security. It is designed to help AI developers prepare for the upcoming AI Act. While the transparency and copyright guidance applies broadly to all general-purpose AI providers, the safety and security chapter targets creators of the most advanced models. By committing to these safety measures, xAI gains a degree of legal clarity under the EU’s still-evolving regulations.

In a post on X (formerly Twitter), xAI stated, “xAI supports AI safety and will be signing the EU AI Act’s Code of Practice Chapter on Safety and Security. While the AI Act and the Code have a portion that promotes AI safety, its other parts contain requirements that are profoundly detrimental to innovation and its copyright provisions are clearly an over‑reach.” The company did not specify whether it intends to adopt the transparency or copyright chapters.

Tech Titans Take Varied Stances

Other major tech firms have signaled differing levels of commitment. Alphabet’s Google has publicly affirmed its intent to sign the entire Code of Practice, and Microsoft President Brad Smith has indicated his company is “likely” to follow suit. In contrast, Meta Platforms declined to sign any chapters, arguing that the Code introduces legal uncertainties for model developers and extends beyond the scope of the forthcoming AI Act.

Although adherence to the Code is voluntary, signatories may benefit from enhanced legal certainty and a smoother transition into the binding requirements of the AI Act, slated to take effect within the next two years. The Code’s safety chapter covers risk assessments, robustness testing and continuous monitoring protocols: measures aimed at preventing misuse and ensuring the secure deployment of powerful AI systems.

As the AI Act’s requirements move toward enforcement, xAI’s selective embrace of the safety chapter positions the company as a pro-safety innovator resistant to what it views as stifling transparency and copyright mandates. Whether xAI will expand its commitment to other Code chapters, and how the wider industry will respond, remain open questions as Europe prepares to enforce its landmark AI regulations.
