AI Labelling, Quicker Takedowns: Decoding India’s New Social Media Rules

Understanding What Stays the Same Under India’s New Social Media Compliance Rules

IT Rules Amendment 2026
Summary
  • Key compliance requirements, including grievance officers, privacy policies, and user agreements, remain unchanged under the new rules.

  • Safe harbour protection continues, but only for platforms that follow the new synthetic content regulations.

  • Tighter timelines and new responsibilities increase the compliance burden on social media platforms.

A new set of digital regulations coming into effect on February 20 is set to radically change the social media experience of Indians, with potential privacy implications. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 will require users and social media platforms to declare and label all AI-generated content, and also tighten the window for taking down offensive content from 36 hours to 3 hours.

The rules have been framed in the wake of widespread accessibility and adoption of powerful AI-based content generation tools, such as Google Gemini, Sora and Grok, whose creations have become virtually indistinguishable from traditional 'real' images and videos. This has led to the creation of realistic artificial images and videos featuring prominent personalities, such as the recent fake video of the 'arrest' of Rashmika Mandanna.


Scamsters have also started using AI-generated content, including voice-modification technology, to trick less technologically savvy people into divulging their banking details and parting with their money. According to the Ministry of Home Affairs, Indians lost approximately ₹22,845.73 crore to cybercriminals in 2024, marking a steep 206% increase from the ₹7,465.18 crore reported in 2023.

Quicker Takedowns

Under the new rules, the most significant change for intermediaries is the much shorter time allowed to respond to content-related issues. The timeframe for removing content after receiving a government order under Rule 3(1)(d) has been reduced from 36 hours to just 3 hours, while the deadline for taking down non-consensual intimate images or deepfake pornography under Rule 3(2)(b) has been cut from 24 hours to 2 hours. 

In addition, intermediaries must now acknowledge user grievances within 7 days instead of the earlier 15-day limit, and resolve specific complaints within 36 hours rather than 72 hours. These changes greatly strengthen compliance requirements and increase pressure on platforms to act quickly.

According to Ankit Sahni, Partner at Ajay Sahni & Associates, the very short deadlines for removing content, along with rules that hold platforms responsible once they become aware of problems through complaints or internal monitoring, represent a major change in how intermediaries are regulated.

“The statutory safe-harbour framework remains intact, the amendments effectively move platforms away from a passive hosting role towards a more proactive compliance posture,” he said. 

What is SGI?

The new rules clearly define Synthetically Generated Information (SGI) to include any audio-visual content that is created or altered using algorithms and appears reasonably authentic or real.

All AI-generated content must now be prominently labelled so that users can immediately recognize it as synthetic. Under Rule 4(1A), significant social media intermediaries (SSMIs) are now required to ensure that users declare whether the content they upload or share is synthetically generated. Significant Social Media Intermediaries are a special category of large social media platforms defined under India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. 

In addition, intermediaries are now required to embed permanent metadata or unique digital identifiers in such content, allowing it to be traced back to the computer or system used to create it. 

Intermediaries are also strictly prohibited from allowing users to remove this mandatory labelling from the content.

Routine and good-faith edits, such as improving sound, adjusting colours, or formatting documents, are not treated as synthetic information, as long as they do not alter the original meaning.

SSMIs must use automated technical tools to verify the accuracy of these declarations. This framework helps ensure that AI-generated or altered content is identified at an early stage and regulated appropriately.

Enhanced User Accountability & Reporting

Along with introducing strict rules that require intermediaries and digital platforms to act quickly, the authorities have also placed greater responsibility on them for monitoring and managing the conduct of their users.

Intermediaries are now required to regularly inform users, at least once every three months, that violating platform rules may result in account suspension or termination. 

Additionally, platforms are obligated to report to the relevant authorities offences related to child protection under POCSO, as well as crimes covered under the Bharatiya Nagarik Suraksha Sanhita (BNSS), such as defamation, sexual offences, unlawful assembly, rioting and similar acts, ensuring stronger legal compliance and user protection.

The new rules say a platform becomes responsible to act when it becomes aware of a problem, and it can gain this awareness in three ways: by noticing harmful or illegal content on its own, by receiving a report or complaint from a user, or by getting official or reliable information from another source. Once the platform knows about the issue through any of these means, it is expected to take appropriate action.

What Has Not Changed?

Along with the new rules, many existing provisions remain unchanged, and the basic regulatory framework continues to apply. Intermediaries are still required to appoint a Grievance Officer, publish their contact details, and maintain a privacy policy and user agreement. 

The “Safe Harbour” protection under Section 79 of the IT Act also remains in place, but it now applies only if platforms comply with the new rules on synthetic content. 

According to Rohit Kumar, Founding Partner at the public policy firm The Quantum Hub (TQH), the significantly compressed grievance timelines, such as the two- to three-hour takedown windows, will materially raise compliance burdens and merit close scrutiny, particularly given that non-compliance is linked to the loss of safe harbour protections.

The new rules shift more responsibility for content from users to the platforms themselves, while the ten-day deadline gives digital platforms and intermediaries limited time to create, test and deploy the new labelling and takedown tools.
