MeitY Notifies IT Rules to Curb Deepfakes and AI-Generated Content

India Tightens Digital Rules to Tackle AI Misuse and Online Misinformation

Summary
  • MeitY has notified new IT Rules mandating strict regulation, labelling, and faster removal of deepfake and AI-generated content.

  • The amendments require platforms to comply with tighter timelines, enhance user accountability, and maintain safe harbour protection under Section 79.

  • The move aims to curb rising AI-related scams and misinformation while strengthening India’s digital governance framework.

The Ministry of Electronics and Information Technology (MeitY) on Tuesday officially notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aimed at regulating deepfakes and other forms of “synthetically generated information” (SGI).

These rules will come into force on February 20, 2026, giving digital platforms and intermediaries a limited transition period to align their policies and systems with the new compliance requirements.


The amendments introduce stringent rules primarily targeting AI-generated content (deepfakes) and significantly accelerate response timelines for intermediaries.

The notified IT rules place a clear focus on “Synthetically Generated Information” (SGI), including AI-generated and deepfake content. The rules define SGI as audio, visual, or audio-visual material that is artificially or algorithmically created or modified to appear real, portraying individuals or events in a manner that is indistinguishable from reality.

The compliance timeline for digital platforms has also been drastically reduced: intermediaries must now remove content within three hours (previously 36 hours) of receiving a government or authorised order.

Under the new mandatory transparency and technical standards, all synthetically generated content must be “prominently labelled” to ensure it is immediately and easily identifiable as AI-generated.

On enforcement and user accountability, the rules require intermediaries to notify users at least once every three months that any violations may result in immediate account suspension, termination, or removal of content.

Under Section 79 compliance, the rules clarify that intermediaries will not lose their “safe harbour” protection for removing or disabling access to information, including synthetically generated content, when such actions are taken in accordance with the prescribed guidelines. Such compliance will not be treated as a violation of the conditions under Section 79 of the IT Act.

AI-related scams in India have surged in recent years, causing significant financial losses and, in some cases, physical and emotional harm, affecting people across age groups and genders.

In response, the IT Ministry, through its latest amendment notification, has introduced stricter measures for social media platforms and clearly defined what constitutes deepfake and synthetically generated content.

The Information Technology Act was enacted in 2000 as India’s first comprehensive law to address cybercrime, digital fraud, and electronic governance. Since then, the Act has undergone significant amendments, notably in 2008 and later through updated rules and regulations in response to the rise of social media and artificial intelligence.

