Dear Reader,
Artificial intelligence is no longer just about breakthroughs in labs or pumping billions of dollars into data centres — it’s in our hospitals, courtrooms, classrooms, and on the battlefield. At Outlook Business, we believe that India needs a sharp, nuanced, and people-first lens on this transformation.
The Inference is our attempt to make sense of a world being rewritten by AI. In this newsletter, we bring you frontline narratives, boardroom insights, and data you can trust. Whether you’re an investor, founder, policymaker, or just curious — this is where the signal cuts through the noise.
In this edition of the newsletter:
Deloitte's AI snafu creates chaos in consulting
Box office’s battle for survival
Can AI lift India’s GDP growth?
All that glitters ain’t gold
Humans in the Loop
Deloitte's AI snafu creates chaos in consulting
As Shreya got ready to sit for Diwali puja at her apartment in a tony Gurgaon neighbourhood, her phone beeped with a series of messages from her boss. A US pharma major had called an urgent meeting on strategy. As a mid-level analyst at a management consulting firm, she was expected to listen in on the call, transcribe it and hash out a dossier based on it.
“Just use AI and get done with it,” said Shreya’s husband, who works in the HR department of a consumer giant. He himself had been using Copilot for plenty of mundane tasks.
“I don’t want to do a Deloitte. My promotion is due,” she retorted.
When you are leading a large organisation, and billions of dollars and the company’s future ride on your choice, you hire a consultant. Shreya knew doing a ‘ChatGPT’ on the task was not what was expected of her.
The consulting industry is built on credibility. But a recent incident involving Deloitte, one of the biggest players in the industry, has brought unwanted focus on the use of AI inside consulting firms. The firm was forced to refund part of its $440,000 fee to the Australian government after errors were found in a report created using AI. The document contained fabricated citations and a misattributed quote. Deloitte later admitted to using OpenAI’s GPT-4o during early drafting.
“In discussions with colleagues, we’ve been talking about the Deloitte incident quite a bit. Everyone has become extra cautious while using ChatGPT, even for basic corrections,” said a Delhi-based assistant manager at one of the big four firms.
The case has sparked a debate within consulting offices globally. Employees say that while no formal directive has been issued in India, there is a visible change in approach. “We have internal checks for AI-generated data in reports. Without an okay from the risk team, we can’t publish anything,” the Delhi-based assistant manager said. Her firm relies on people for core analysis, and the use of general AI models is limited to visual conceptualisations.
India’s consulting market is expected to touch $24 billion in 2025. AI is already part of the workflow, but most large firms prefer internal systems. Bain uses Sage, an AI copilot trained on proprietary data, while McKinsey’s Lilli now assists over 70% of its workforce. Issues like those in the Deloitte report typically arise when general-use models are involved, not controlled internal tools.
A senior consultant with two decades of independent practice said the controversy has unnecessarily put the spotlight on an entire industry. “Clients have started asking if AI was used in reports. That’s understandable after all the noise,” he said. In his view, AI misuse is neither widespread nor tolerated. To maintain trust, he has internally advised junior researchers to be extra cautious while using generative AI tools for any purpose.
The fallout has triggered a wave of introspection. Many firms are quietly reviewing how generative tools fit into their workflow. The aim is not to curtail the adoption of technology but to ensure human oversight.
The CEO of a Gurugram-based hospitality consultancy added that the Deloitte case has made firms tighten quality control. His company, too, has strengthened its review systems. “We already had multiple layers of checks to prevent AI hallucinations from seeping into reports. Now we are actively asking junior researchers to double-check every figure,” he said.
Consultants admit that personal AI use cannot be fully stopped, but awareness, fair-use policies, and layered reviews are helping draw boundaries. The spotlight on consulting may fade, but the heightened oversight and caution around AI are likely to stay.
From the Trenches
Box office’s battle for survival
It feels like the wild west again for original content creators. Last month, Bollywood actors Abhishek Bachchan and Aishwarya Rai Bachchan sued YouTube and its parent company Google over AI-generated videos using their likeness without consent. The case challenges YouTube’s policy of allowing users to upload AI videos generated from copyrighted material, and highlights the risk of those videos being used to train other AI models.
Across the world, similar lawsuits are piling up as celebrities, creators and media companies try to reclaim control over how their work feeds the machines. For instance, Warner Bros Discovery recently sued popular AI image generation platform Midjourney, alleging the latter stole its content to generate images of iconic characters like Superman, Batman and Wonder Woman.
Sunil Nair, a veteran of the media industry who has worked with entertainment majors Balaji and Star TV, says: “For years, universal video models have scraped the internet without paying or attributing.”
But he isn’t sitting still while AI raids the intellectual property that supports millions of livelihoods.
Clairva, a startup co-founded by Nair, is trying to reset the table. The company’s software ingests licensed video from creators, studios, and libraries into a controlled vault. As usage by AI is logged, payment flows back to the IP owner. The pitch is simple: make it easy and legal to use original clips and reduce unlicensed scraping of content.
Nair estimates claims tied to training data at hundreds of billions of dollars globally. As per a BCG report, the Indian creator economy generated direct revenues of $20–25 billion, which is slated to grow fivefold to $100–125 billion by the end of the decade. With indiscriminate scraping and unlicensed AI training, that growth could quickly come under threat.
The fight is not just legal; it is economic and cultural. If training data is licensed and provenance-rich, creators get paid and their work remains authentic. If not, scraping will ultimately kill the golden goose of original content that trains AI.
Numbers Speak
Can AI lift India’s GDP growth?

A PwC–Google study titled ‘AI works for governments’ estimates that AI adoption in the public sector could lift emerging-market GDPs by an additional 6% to 7% over a decade once adoption becomes widespread. In the report, widespread adoption is defined as the point at which roughly half of all businesses use generative AI.
But India’s cumulative GDP uplift is projected at just 3.8%, the lowest among major Asia-Pacific economies.
The reason lies in readiness. Countries like Malaysia (7%), the Philippines (5.9%) and Thailand (5.1%) are expected to benefit more as they move faster on cloud infrastructure, public-sector data frameworks and AI policy execution. India’s AI use in government remains patchy, largely limited to pilots in tax analytics, agriculture and citizen grievance systems.
PwC’s model assumes productivity gains depend on “institutional capacity and political commitment.” India’s fragmented digital readiness across states slows the curve. Even as India leads in private-sector AI innovation, public-sector adoption remains cautious.
If implemented well, AI could reduce leakages, improve targeting, and free billions in public spending. For now, the AI dividend in governance remains a slow climb.
Words of Caution
All that glitters ain’t gold
Bain & Company recently warned investors to dig deeper before betting on AI-native startups. In one case, a private equity firm eyeing a medical AI company built its own prototype during due diligence, and did it in just two weeks. The result? The prototype outperformed the target’s product. The acquirer walked away.
It’s a cautionary reminder that in the AI gold rush, some of the “innovation” may be less defensible than it looks. Tools can be rebuilt faster and cheaper than AI startup valuations suggest. For investors, due diligence should not stop at financials and forecasts; they will have to test whether the underlying technology really stands out. In AI, speed cuts both ways: what impresses today may be obsolete by next quarter.
Best of our AI coverage
Dear Sam Altman, Indian Start-Ups Have A Wishlist For You (Read)
India’s Healthcare AI Start-ups Grapple with a Broken Data Ecosystem (Read)
As AI Anxiety Grips Top MBA Campuses, the ‘McKinsey, BCG, Bain’ Dream Flickers (Read)
AI Start-Ups Ride a Wave of ‘Curiosity Revenue’, VCs Rethink What It’s Worth (Read)
What Exactly Is an ‘AI Start-Up’ — and Does India Have 5,000 of Them? (Read)