Game of Shadows: India’s Deepfake Dilemma

Emptying savings, destroying reputations, cheating investors and swaying votes, deepfakes in India have moved from curiosity to crisis, leaving citizens exposed and regulators scrambling

In June 2025 a gang of AI-armed scamsters relieved a 79-year-old in Bengaluru of ₹35 lakhs. The bait: deepfaked videos of N. R. Narayana Murthy, deployed to lure her onto a bogus trading site. False profit reports followed, along with invented “financial managers” and escalating fees, until both money and fraudsters vanished. Police say similar videos featuring Murthy and Mukesh Ambani conned two more victims out of ₹95 lakhs in late 2024.

In October 2023 Rashmika Mandanna’s face was grafted onto another woman’s body in a pornographic clip that raced across WhatsApp and Telegram before police traced it to a young engineer. In Assam, influencer Archita Phukan found her likeness forged into an online persona that amassed 1.3 million followers and ₹10 lakh in subscriptions before her ex-boyfriend’s arrest. During the 2024 general elections, a video showed Congress MP Manish Tewari making incendiary speeches in Haryanvi, a language he does not speak. It sped through constituencies before fact-checkers doused it; the source remains elusive.

These are not glitches in the matrix but glimpses of a new, fast-evolving menace. When identity itself can be stolen, repurposed and redeployed at the speed of the feed, the harder question is whether the law can catch up—and quickly enough.

Deepak Sharma, a Chandigarh-based political strategist and analyst who has covered elections for several years and across multiple parties, said deepfake is an apt name for a new kind of disruption. However, he pointed out that such manipulations are not entirely new.

“Even earlier, videos were altered—clips spliced, scenes added or removed. It may not have been AI, but digital manipulation has always existed,” he noted.

The anatomy of a fake

Deepfakes, a portmanteau of “deep learning” and “fake”, are hyper-realistic fabrications generated by neural networks that learn the statistical patterns of a person’s face or voice and then synthesise new images, video or audio with eerie fidelity. The same toolchain that powers a harmless face-swap in a film studio powers a smear campaign against a teenager or a heist in a corporate finance office.

According to Prof. V. Krishna Nandivada, Head of Computer Science and Engineering at IIT Madras, these systems are trained on vast collections of publicly available images, video and audio. Generative adversarial networks (GANs) form the backbone: “The generator creates fake images or videos, the discriminator tests them against reality, and both sharpen through this adversarial loop,” he says.
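For readers who want the mechanics, the adversarial loop Prof. Nandivada describes can be sketched in a few lines of Python. This is a deliberately toy illustration using the PyTorch library; the tiny networks and random “face” vectors below are stand-ins for the deep convolutional models and vast training corpora that real deepfake tools use:

```python
# A minimal sketch of the GAN training loop described above, using PyTorch.
# The 64-dimensional vectors stand in for face images; real systems use
# deep convolutional networks trained on large image/video collections.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch(n=32):
    # Stand-in for a batch of genuine face embeddings.
    return torch.randn(n, data_dim) + 2.0

for step in range(1000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim))

    # Discriminator: learn to score real samples high, fakes low.
    d_loss = (loss_fn(discriminator(real), torch.ones(real.size(0), 1))
              + loss_fn(discriminator(fake.detach()),
                        torch.zeros(real.size(0), 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make the discriminator score fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The takeaway is structural: the generator never copies real data directly; it improves only by fooling an ever-improving discriminator, which is why the outputs keep getting harder to tell from the real thing.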

The leap from research code to consumer-grade apps has been swift. One reason deepfakes bite so hard is simple: people are poor at spotting them. In controlled experiments, even when warned and incentivised, viewers struggled to distinguish fakes from authentic footage. That is before a clip is compressed for mobile, shared in a trusted family group and headlined with a caption designed to inflame.

Nidhi Sinha, a cognitive psychologist at IIT Hyderabad, explained that humans rely heavily on “System 1” thinking—the fast, intuitive mode of processing—“which makes us naturally bad at detecting synthetic media. Spotting a deepfake requires deliberate effort: questioning what we see, analysing inconsistencies, and challenging our own beliefs.”

Globally, the technology has already moved from the fringe to the front office of cybercrime. In 2019 fraudsters cloned a chief executive’s voice and convinced a subordinate to wire €220,000 to a bogus supplier. By early 2024 a Hong Kong employee, fooled by a deepfaked video conference that appeared to include the firm’s finance boss, authorised transfers totalling around $25m, one of the largest such scams on record.

Sandeep Shukla, a cybersecurity expert, noted that while GenAI tools have made deepfakes easier to produce, earlier advances in speech and vision processing had already enabled highly realistic impersonations. “For most cybercriminals WhatsApp or email spoofing works well enough. They rarely need to bother with sophisticated deepfakes.” Many Indian firms, he said, quietly avoid reporting such cases for fear of reputational damage. Detection tools exist but are imperfect, and even if integrated into phones, would raise new privacy concerns.
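Shukla’s caveat about imperfect detection is easier to appreciate once you see what an automated detector typically is: a binary classifier scoring individual frames. The sketch below, in Python with PyTorch and torchvision, shows only that shape; the untrained ResNet-18 stand-in, the two-class convention and the idea of training on a corpus such as FaceForensics++ are illustrative assumptions, not a description of any deployed tool:

```python
# Illustrative sketch of frame-level deepfake detection: a binary
# image classifier applied to video frames. Real detectors are far
# more elaborate; the weights here are untrained stand-ins.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Hypothetical detector: ResNet-18 backbone with a 2-class head
# (real vs. fake). In practice this would be trained on a labelled
# corpus such as FaceForensics++.
detector = models.resnet18(weights=None)
detector.fc = nn.Linear(detector.fc.in_features, 2)
detector.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_frame(frame_pil):
    """Return the model's estimated probability that a frame is synthetic."""
    x = preprocess(frame_pil).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(detector(x), dim=1)
    return probs[0, 1].item()  # index 1 = "fake" class by our convention
```

Compression, re-encoding and newer generation methods all shift the statistics such a classifier has learned, which is one reason false negatives persist in the wild.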

India’s exposure

Three structural facts make India unusually vulnerable. First, sheer scale: more than a billion mobile connections and, by 2025, 85% of households expected to own a smartphone. Average monthly data use per subscriber now exceeds 24GB—most of it video. According to the IAMAI–Kantar Internet in India Report 2025, active internet users reached 886 million, an 8% year-on-year rise, and will surpass 900 million by the end of the year.

Second, digital consumption has outrun digital literacy; for many users, what appears on a screen still carries the aura of truth. Third, social platforms act as force multipliers in dozens of languages, where moderation is patchy and fact-checking thin.

“Deepfakes that once needed powerful computers can now be made with simple apps. Ordinary users don’t have the tools to detect them—they see something, and it looks real, so they believe it,” said Anurag Mehra of IIT Bombay.

The results are visible. Indian celebrities have repeatedly found their faces grafted onto pornographic clips that spread across Instagram and WhatsApp before takedowns bite. Financial regulators warn of deepfaked “advice” videos featuring cloned corporate chiefs. During the 2024 general election, synthetic audio and video became a booming cottage industry for political outreach, with millions of AI-generated calls and tailored clips slicing the electorate into micro-audiences.

The effect is not straightforward propaganda, but a fog of plausibility, where the liar’s dividend—“that video is fake”—becomes a universal defence.

Says Deepak Sharma: “Deepfake is an apt name for a new kind of disruption. If one party circulates a deepfake, the opposition doesn’t counter with facts but with another deepfake. It becomes a spiral of dirty tricks.”

What the law covers—and misses

India’s legal toolkit was not designed for synthetic media, yet parts of it bite. The Information Technology Act, 2000 penalises identity theft and cheating by personation using a computer resource; it also criminalises obscene and sexually explicit content, which covers much deepfake pornography. However, Sharma believes the legal framework is inadequate. “The IT Act of 2000 is at least five years behind technological reality. Authorities—whether the Election Commission or the government—are always playing catch-up. It’s like scams: by the time the police crack one, a new one has already started.” He likens the fight against deepfakes to the battle against drugs: “First, you educate people—show them how it destroys lives. Second, you punish the sellers. With deepfakes, too, you must combine awareness with strict regulation.”

The Bharatiya Nyaya Sanhita covers offences akin to defamation and public mischief that can attach to faked speeches or doctored footage of riots. The Digital Personal Data Protection Act, 2023 penalises the unauthorised processing of personal data, including faces and voices. Is that enough?

Tech lawyer Salman Waris notes that India still lacks a dedicated deepfake statute. Instead, sections of the IT Act, IPC and DPDP Act cover identity theft, obscenity, defamation, and privacy breaches. “Enforcement is weak without a legal definition of deepfakes, compounded by technical hurdles, problems proving intent, evidentiary gaps, and jurisdictional limits once content crosses borders,” he observes.

Courts have improvised. In 2023 the Delhi High Court granted actor Anil Kapoor interim protection over his name, image, voice and persona, explicitly targeting AI-driven misuse. It was a personality-rights workaround rather than a deepfake law, but signalled judicial concern.

Yet gaps remain. None of these instruments clearly recognises a statutory right to one’s likeness and voice. Victims often must stitch together complaints across obscenity, defamation, cheating and data protection—each with its own delays—while the content metastasises online. Political manipulation, revenge porn and mass disinformation are addressed only indirectly.

Globally, meanwhile, the EU’s AI Act and the UK’s Online Safety Act have introduced provisions on labelling, watermarking, and platform liability, with a US DEEPFAKES Accountability Act still at the proposal stage. Waris suggests that until India enacts a dedicated Deepfake Act, amendments to the IT Act and DPDP Act could plug immediate gaps.

Lessons from Denmark

Denmark’s new law offers a glimpse of what sharper guardrails might look like. By declaring that citizens own their face and voice as protected subject-matter, it created a watertight framework for takedowns and damages without forcing victims to prove obscenity or disorder. Consent sits at the core; manipulators must now show they had it. Europe’s wider AI regime already requires labelling and disclosure, but Denmark went further by vesting rights in the individual rather than regulating only the tool.

India could borrow the principle and localise the mechanics. A doctrine tied only to celebrity rights will not suffice in a country of 1.4bn, where teachers, nurses and shopkeepers are as vulnerable as actors or billionaires. Identity deserves protection not because it can be monetised but because it is identity. Until lawmakers accept that, deepfakes will keep metastasising online—and Indians will keep learning that in the digital bazaar, even the self can be stolen.
