Anthropic CEO Dario Amodei expressed concerns about espionage risks faced by US AI companies, particularly from China, during an event at the Council on Foreign Relations, TechCrunch reported. The CEO urged the US government to intervene, warning that Chinese agents could potentially access valuable "algorithmic secrets" from leading American AI firms.
Amodei highlighted the financial risks involved and emphasised the need for stricter government measures to protect proprietary innovations. He also advocated tighter export controls on AI chips to prevent potential military misuse and called for greater collaboration between the government and AI leaders to secure advanced facilities.
Amodei warned that China is known for its "large-scale industrial espionage" and that AI firms like Anthropic are likely targets. He pointed out that some "algorithmic secrets" valued at $100 million could be just a few lines of code, making them vulnerable to theft. "I am sure there are folks trying to steal them, and they may be succeeding," Amodei reportedly stated.
In a policy submission, Anthropic suggested that the federal government should work closely with AI industry leaders, as well as US intelligence agencies and their allies, to enhance security at frontier AI labs.
Amodei's comments reflect his critical stance on Chinese AI research. He claimed that DeepSeek received "the worst" score on a bioweapons safety test conducted by Anthropic. He has also been a strong advocate for stricter US export controls on AI chips bound for China.
Amodei’s Warning
Amodei has previously argued that safety testing should be mandatory for AI companies, including his own, to ensure their technologies are safe for public use before release.
Speaking at an AI safety symposium organised by the US Departments of Commerce and State in San Francisco, Amodei addressed the issue, stating, "I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it."
His remarks came after the US and UK AI Safety Institutes released their assessment of Anthropic's Claude 3.5 Sonnet model, which evaluated its performance across various domains, including biological capabilities and cybersecurity. Previously, both Anthropic and its competitor OpenAI had agreed to share their AI models with government agencies for evaluation.