The government is providing a grace period for platforms to integrate labelling technologies
Social media platforms must now prominently label all synthetically generated content
The government has recently notified the IT Amendment Rules, 2026
The government will give social media intermediaries time to integrate AI labelling technologies into their platforms before strictly enforcing the new regulations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, according to a report by The Economic Times.
The updated rules require social media platforms to prominently label “synthetically generated” content, including AI-generated images and videos.
Officials told the publication that companies will need to ensure their detection and labelling technologies function effectively and must be prepared to demonstrate their reliability to the government whenever required.
As per the report, many major technology companies are already developing such systems globally and deploying them in India, but the tools will need to be adjusted to align with the new regulatory framework aimed at tackling deepfakes and harmful AI-generated content.
The amendment was notified by the Ministry of Electronics and Information Technology on February 10, 2026, updating the earlier IT Rules 2021. The new rules require both users and social media platforms to label AI-generated content and significantly tighten the timeline for removing unlawful content, from the earlier 24-36 hours to just two to three hours.
The regulations came into effect on February 20.
Under the updated framework, social media platforms with more than five million users must obtain a declaration from users when content is AI-generated and carry out technical verification before publishing it.
The government has also directed large platforms to deploy “reasonable and appropriate technical measures” to prevent unlawful synthetically generated information (SGI) and ensure proper labelling, provenance tracking and identifiers for permissible AI-generated content.
According to MeitY, the provisions are aimed at countering deepfakes, misinformation and other forms of unlawful content that could mislead users, harm individuals, violate privacy or threaten national integrity. The ministry emphasised that users should be able to clearly identify whether the content they are viewing is authentic or artificially generated.
The amendments also introduce additional compliance obligations for intermediaries. Platforms must remind users of their terms and conditions more frequently: notifications about platform rules and user responsibilities will now be required at least once every three months, instead of once a year.
Platforms must also explicitly warn users that sharing harmful deepfakes or other illegal AI-generated content could have legal consequences, including disclosure of the user's identity to law enforcement agencies, immediate removal of the content, and suspension or termination of user accounts.