OpenAI’s Pentagon deal sparked a #CancelChatGPT trend, driving 2.5 million users to pledge boycotts
Claude overtook ChatGPT as the top AI app on the US App Store amid the backlash
Users cite concerns over mass surveillance, political donations and alleged ties to ICE's resume-screening tools
Anthropic previously rejected the same contract, claiming the Pentagon demanded removal of safety guardrails
OpenAI recently signed an agreement with the US Department of Defense under which the AI start-up will deploy its models within the department’s classified network. Shortly after the deal was announced, several users of OpenAI’s chatbot ChatGPT began cancelling their subscriptions, and the hashtag #CancelChatGPT started trending online.
Many of these users reportedly shifted to OpenAI’s rival start-up Anthropic. Amid the trend, Anthropic’s AI model Claude overtook ChatGPT to become the top AI app on Apple’s US App Store.
Notably, Anthropic had rejected the same Pentagon deal before OpenAI signed it, citing mass surveillance and security risks. Explaining the decision, Anthropic CEO Dario Amodei said, “we cannot in good conscience accede to their request.”
Why Users Want to Cancel ChatGPT
According to data from QuitGPT, a website advocating a boycott of OpenAI’s models, around 2.5 million individuals have pledged to unsubscribe from the company’s services.
They argue that the deal could pose risks to user privacy and security, as the technology might potentially be used in warfare or surveillance-related activities.
The website headline reads, “CHATGPT TAKES TRUMP'S KILLER ROBOT DEAL. IT'S TIME TO QUIT.”
Beyond the defense agreement, the site lists additional reasons for its distrust of the company. It alleges that OpenAI president Greg Brockman and his wife donated $25 million to MAGA Inc in 2025, while OpenAI CEO Sam Altman reportedly contributed $1 million to Donald Trump’s 2025 inaugural fund. According to the website, these contributions were significantly larger than those made by other major AI companies.
The website also claims that a resume-screening tool used by the US Immigration and Customs Enforcement (ICE) is powered by OpenAI’s GPT‑4. It further alleges that the company is spending $50 million to oppose state-level AI regulation in the United States.
In addition to user concerns, some OpenAI employees have also expressed disappointment over the agreement. In a post on X, OpenAI research scientist Aiden McLaughlin wrote, “I personally don’t think this deal was worth it.” He added that the volume and depth of internal discussions around the agreement had been overwhelming.
Pain Points of the OpenAI-Pentagon Deal
In its announcement post, OpenAI shared an excerpt of the contract that read, “The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.”
Users suspect that the phrase “all lawful purposes” is broad enough to be interpreted as permitting the AI models to be used for mass surveillance and autonomous weapons.
However, while announcing the deal with the Pentagon, OpenAI clarified that its models would not be used for such purposes.
It stated that the agreement includes more safeguards than any previous arrangement for classified AI deployments, including Anthropic’s.
The company outlined three primary red lines: no use of OpenAI technology for mass domestic surveillance; no use to direct autonomous weapons systems; and no use for high-stakes automated decision-making, such as social credit systems.
“In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via the cloud, cleared OpenAI personnel remain in the loop, and we have strong contractual protections. This is in addition to existing protections under US law,” the announcement blog stated.
The clarification evidently did not convince users, who in turn decided to boycott the company’s models.
OpenAI Accepted What Anthropic Rejected
The OpenAI-Pentagon deal was signed just a few days after a public standoff between Anthropic and the US Department of Defense.
Anthropic CEO Dario Amodei had released a statement revealing details of the company’s discussions with the US Department of War. Amodei stated that the Pentagon had threatened to remove the company from government contracts and designate it a “supply chain risk” if it did not remove the safeguards applied to Claude.
A supply chain risk designation is a label reserved for US adversaries and has never been applied to an American company before.
He added that two use cases have never been included in Anthropic’s contracts with the Department of War. The first is mass domestic surveillance, which the company believes is incompatible with democratic values and presents serious, novel risks to fundamental liberties. The second is fully autonomous weapons. Amodei remarked that fully autonomous weapons (those that take humans out of the loop entirely and automate the selection and engagement of targets) may prove critical for national defense, but that today’s frontier AI systems are simply not reliable enough to power them.
“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” Amodei said, adding that the company has offered to work directly with the Department of War on R&D to improve the reliability of these systems, but the department has not accepted the offer.