
Deloitte Still Bets on AI with Anthropic Deal After 'Inaccurate' Report Refund Saga

Partnership includes Claude Centre of Excellence, 15,000-person certification programme and compliance-focused offerings for regulated sectors

Deloitte South Asia
Summary
  • Deloitte expands Anthropic partnership to deploy Claude across 470,000 enterprise professionals

  • Firms will establish a Claude Centre of Excellence to scale safe, compliance-minded enterprise AI

  • Joint certification trains 15,000 Deloitte staff in Trustworthy AI governance and deployments

  • Deloitte refunds Australia contract after AI errors, underscoring governance and accuracy risks

Professional services giant Deloitte and AI start-up Anthropic said they have expanded their strategic alliance to roll out Anthropic’s Claude across Deloitte’s global network and co-develop industry-specific, compliance-minded AI solutions.

Under the deal, Claude will be made available to more than 470,000 Deloitte people and the two firms will establish a Claude Centre of Excellence to accelerate safe, scalable deployments.

The announcement arrives as Deloitte continues to lean into AI across advisory, tax and audit services. It also comes after Deloitte agreed to refund part of an Australian government contract because a report it delivered contained AI-generated inaccuracies.

Deloitte has since issued a corrected report and publicly acknowledged that it will repay the final contract instalment.

Executives for both companies framed the partnership as a test of bringing powerful generative models into mission-critical, compliance-heavy workflows. Anthropic emphasizes Claude’s safety-first design, while Deloitte brings domain controls, governance frameworks and global scale to implementation and oversight.

Partnership Details

The Centre of Excellence will house trained specialists who design implementation frameworks, share operational best practices, and provide ongoing technical support, with the explicit aim of moving AI pilots into production at scale.

As part of the partnership, Deloitte and Anthropic will also create a formal certification programme to train and certify 15,000 Deloitte professionals to implement and govern Claude-driven solutions across the firm.

Deloitte said the collaboration will focus on regulated industries, including financial services, healthcare, life sciences and public services, and will fuse Claude’s safety-oriented architecture with Deloitte’s Trustworthy AI™ framework. The objective is to build features that increase transparency, support compliance, and enable enterprises to deploy generative-AI tools with stronger guardrails.

Claude Updates

The pair will co-create customised Claude “personas” tailored for different roles across the firm, from accountants to software developers, so teams can access assistants tuned to their workflows and regulatory needs.

Deloitte plans to use the CoE and certified practitioner pool to accelerate client rollouts and to underpin its own internal AI transformation.

Anthropic described the expanded relationship as its largest enterprise deployment to date. The company has been scaling internationally while launching newer models (including Claude Sonnet 4.5) and expanding investor support; financial terms of the Deloitte agreement were not disclosed.

Deloitte’s AI-Generated Report

Deloitte will issue a partial refund to Australia’s federal government after admitting that generative artificial intelligence was used in producing a $440,000 report that contained multiple factual errors and citations to non-existent sources.

The Department of Employment and Workplace Relations (DEWR) confirmed that Deloitte would repay the final instalment under the contract; the amount will be made public once the transaction is finalised.

The report, commissioned in December 2024, reviewed the government’s targeted compliance framework and its IT system that automates welfare penalties for jobseekers who fail to meet mutual obligations.

Published on July 4, it identified serious flaws, including weak “traceability” between policy rules and legislation, and IT systems “driven by punitive assumptions of participant non-compliance.”
