IT Rules Amendment 2026: 3-Hour Takedown Rule, Synthetic Content Defined, New Compliance Norms

India has amended its IT Rules 2021, mandating a 3-hour takedown for illegal AI content and mandatory labelling of AI-generated content.

IT Rules Amendment 2026
Summary
  • A new 3-hour deadline for removing unlawful AI content replaces the previous 36-hour window

  • "Synthetically generated information" is officially defined, making platforms liable for deepfake violations

  • Mandatory AI labelling and provenance markers must now be embedded in all generated media

The government on Tuesday amended India’s 2021 IT Rules to require social media platforms to remove unlawful content, including AI-generated material, within three hours of receiving a takedown notice, cutting the earlier 36-hour deadline.

The changes, effective February 20, are aimed at curbing illegal and misleading content, with a specific focus on deepfakes and AI impersonation.


Synthetic Information Defined

For the first time, the rules formally define “synthetically generated information” as audio, visual or audiovisual content that is artificially created or altered to appear real, while excluding routine edits, accessibility features, academic or training material, and minor technical corrections.

Such synthetic content is now explicitly treated as “information” under IT law, making intermediaries liable when it violates legal provisions.

Compliance Obligations for Intermediaries

The amendments impose new compliance obligations on platforms, including mandatory labelling of synthetic content through visible disclosures, audio prefixes, and embedded metadata or provenance markers that can trace the generating tool, where technically feasible.

Platforms are barred from allowing these labels or identifiers to be removed or suppressed.
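The amendment does not prescribe a specific technical format for these embedded provenance markers; industry standards such as C2PA content credentials are one approach platforms may adopt. As a purely illustrative sketch, assuming a simple sidecar-record convention that the rules themselves do not mandate (the field names, the `make_provenance_record` helper, and the tool name below are all hypothetical), a minimal marker binding a generated file to its generating tool might look like:

```python
import hashlib
import json

def make_provenance_record(media_bytes: bytes, tool_name: str, tool_version: str) -> dict:
    """Build a minimal provenance record for synthetically generated media.
    All field names are illustrative, not taken from the amended rules."""
    return {
        "synthetic": True,  # explicit AI-generated disclosure flag
        "generator": {"tool": tool_name, "version": tool_version},
        # Hash of the media bytes ties the record to this exact content,
        # so stripping or swapping the marker is detectable.
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }

record = make_provenance_record(b"<generated image bytes>", "example-gen", "1.0")
print(json.dumps(record, indent=2))
```

Binding the record to a content hash is one way a platform could make label removal or suppression detectable, since a re-uploaded file without a matching record would fail verification.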

Significant social media intermediaries must also obtain user declarations at the time of upload on whether content is AI-generated and deploy technical checks to verify these claims. Confirmed synthetic content must be clearly disclosed before publication, with failures potentially considered a breach of due diligence.

Curb on Unlawful Synthetic Content

The rules further mandate automated safeguards to prevent the creation or spread of unlawful synthetic content such as child sexual abuse material, non-consensual intimate imagery, impersonation, obscenity, false electronic records, and content linked to explosives or arms.

Users must be warned that violations can lead to account suspension, content removal, identity disclosure to affected parties, and reporting to law enforcement.

Other Details

Other timelines have also been tightened, with grievance acknowledgement periods reduced to seven days and certain removal actions requiring compliance within as little as two hours.

Legal references have been updated to reflect recent criminal law reforms, replacing the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023.

The government clarified that takedowns, including those carried out through automated tools, will not affect intermediaries’ safe-harbour protection under Section 79 of the IT Act if due-diligence norms are met.

