Artificial Intelligence

Anthropic Bars Companies Majority-Owned by Chinese Entities from Buying Its AI Services

Anthropic will ban companies majority-owned by entities in China and other “adversarial” states from using its AI models, closing loopholes on subsidiaries and cloud access. The move underscores rising US concerns over AI’s military use.

Summary
  • Anthropic halts services to organisations more than 50% owned by entities in “adversarial” countries

  • China named among restricted jurisdictions; rule applies irrespective of subsidiary location

  • Policy closes ownership loopholes and affects customers using third-party cloud providers

  • Company cites national-security risks, aiming to protect democratic interests and US leadership

Anthropic said on Friday that it will immediately stop providing its AI services to organisations that are more than 50% owned directly or indirectly by entities based in countries it classifies as adversarial, including China, citing national security risks.

The San Francisco startup said the broadened restriction closes what it called a loophole where foreign parents or subsidiaries could access sensitive technologies and funnel them into military, intelligence or authoritarian uses.

Under the updated terms, the ban applies regardless of where a subsidiary operates. Companies majority-owned by groups in restricted jurisdictions will be barred from using Anthropic’s models even if they are incorporated elsewhere. Anthropic also warned that customers accessing its systems through third-party cloud providers could be affected.

The company framed the step as part of a broader effort to ensure transformative AI advances democratic interests and US leadership in the technology.

National Security Rationale

Anthropic’s statement argued that firms subject to control by authoritarian governments can be legally compelled to share data or assist local intelligence agencies, creating unavoidable national security vulnerabilities.

The company said such access could be used to create applications that support adversarial military or intelligence objectives, or to accelerate rival AI development through techniques that replicate or distil its models.

The decision comes amid intensifying scrutiny in Washington over the potential military applications of advanced AI and wider calls for tighter export controls. US policymakers and agencies have recently tightened rules and in some cases banned certain foreign AI systems. The move also follows high-profile technical releases this year by Chinese-linked projects that unnerved parts of Silicon Valley and prompted debate about how to protect premium AI capabilities.

Commercial Impact & Trade-Offs

An executive briefed on the policy told the Financial Times the change could trim Anthropic’s global revenue by the “low hundreds of millions of dollars”, and acknowledged the company may forgo business that rivals could win. Still, Anthropic framed the sacrifice as necessary to spotlight the risks and press for stronger public-policy measures, including export controls and national infrastructure investments to secure US leadership in AI.

Anthropic, the developer of the Claude family of models, was founded in 2021 by former OpenAI researchers who prioritised safety-first development. Company leadership, including CEO Dario Amodei, has publicly supported tougher controls on the transfer of advanced AI technology to adversarial states.

Anthropic said it will continue pressing governments for strong export controls and national strategies to prevent misuse of frontier AI by authoritarian actors. The company framed its new policy as part of a collective responsibility among “responsible AI” firms to limit how transformative technologies may be repurposed against democratic interests.
