IT Rules 2026: India’s Creator Economy May Pay the Price for User Safety & Privacy

India’s 2026 IT Rules mandate a three-hour takedown window for unlawful content and mandatory labelling of AI-generated content

Summary
  • IT Amendment Rules 2026 mandate a strict three-hour takedown timeline for platforms

  • New rules shift focus to proactive governance to combat deepfakes and misinformation

  • Up to 2.5 mn creators face potential monetisation risks from algorithmic deprioritisation

Two of the most striking rules in the recently notified Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 are the three-hour takedown timeline imposed on social media platforms and the mandatory disclosure and labelling of synthetic content.

The government’s rationale for introducing the rules is to safeguard users at a time when “synthetic content” is becoming increasingly prominent. For all their benefits, however, the rules may also put a dent in the country’s rising creator economy.

The Trigger

The government’s decision to regulate AI-generated content online comes weeks after an obscene trend went viral on the social media platform X. Images of multiple users, including minors, were obscenely morphed on the platform using xAI’s chatbot Grok.

The trend continued unchecked on the platform for nearly three days before the IT Ministry intervened and brought it to a halt.

This instance exposed how India’s regulatory architecture governing impersonation, manipulated content and deceptive material was essentially designed to be reactive and notice-driven, rather than preventive or proactive.

Tushar Kumar, Advocate, Supreme Court of India, said that under the previous framework, intermediaries were obliged to act primarily upon receipt of specific user complaints, court directions or governmental notices.

He added that synthetic media, deepfakes, and technologically generated impersonation were addressed only tangentially under general categories such as misleading information, privacy violations, or identity fraud.

“The present amendments fundamentally reconfigure this framework by expressly recognising AI-generated and manipulated content as a distinct regulatory category and by imposing affirmative, preventive obligations. In effect, the earlier complaint-based compliance regime has been replaced by a proactive governance model, substantially narrowing the conceptual space of neutral intermediaries,” Kumar said.

Impact on Creator Economy

A Boston Consulting Group (BCG) report published in May 2025, titled “From Content to Commerce: Mapping India’s Creator Economy,” estimated the country’s creator economy to be valued at $20–25 billion.

The report notes that Indian creators currently influence over $350 billion in annual consumer spending, a figure projected to cross $1 trillion by 2030, highlighting their growing role in shaping purchasing decisions and digital commerce.

India is reportedly home to an estimated 2 to 2.5 million active digital creators, defined as individuals with more than 1,000 followers. However, despite this scale, only 8–10% are effectively monetising their content, highlighting significant untapped potential within the ecosystem.

With the introduction of new rules around digital and synthetic content, industry observers caution that compliance burdens, potential algorithmic deprioritisation and monetisation risks could sap the sector’s growth momentum and dent its projected expansion.

Arun Prabhu, Partner & Co-Head, Digital+, TMT, Cyril Amarchand Mangaldas, said, “This takedown timeline is fairly aggressive, and one of the most demanding timelines globally. While creators will likely rely on tools made available to them by platforms, the real impact will be increased caution by platforms in allowing publication and monetisation of synthetic content.”

Monetary Impact

To comply with the takedown timeline and the content-labelling mandate, both creators and platforms will have to invest further in operations and architecture.

The amendments aim to make AI content visible, traceable, and accountable. While this protects individuals and combats misinformation, it increases operational costs for both platforms and creators, potentially leading to consolidation among smaller AI-tool builders.

Ankit Sahni, Partner, Ajay Sahni & Associates, said that larger intermediaries may already possess automated detection tools and dedicated compliance teams, but smaller platforms could struggle to deploy technical measures at comparable scale.

“The Rules require reasonable and appropriate measures, but what is considered reasonable may evolve with regulatory interpretation. As a result, resource-constrained platforms may need to significantly upgrade moderation systems to remain within the protective scope of safe harbour,” he added.

“While the most onerous obligations, i.e. those that surround identification, labelling and the like, are reserved for Significant Social Media Intermediaries, the short notice and takedown timelines will still mean that mere intermediaries, which term is defined fairly widely, will need to make material investment in platforms and architecture to meet the regulatory obligations,” Prabhu said.

Indirect Impact on Creator Economy

The new IT Rules define synthetic content as audio-visual material created or altered algorithmically that appears “indistinguishable” from a natural person or real-world event.

While aimed at curbing misinformation and deepfake misuse, the expansive definition introduces substantial interpretative and enforcement challenges in India’s complex digital ecosystem.

AI-dependent creators, particularly those producing deepfake comedy, AI avatars, cloned voices or hyper-realistic filters, are likely to face immediate impact. Such content may be automatically flagged as “synthetic,” triggering requirements for prominent and persistent disclosures, including metadata labelling.

This, in turn, could affect discoverability, monetisation and audience engagement.

Advocate Kumar noted that in a jurisdiction marked by “linguistic plurality, vibrant political discourse, pervasive satire and uneven digital literacy,” the boundary between deception and legitimate expression is inherently fluid.

He argued that automated moderation systems are “structurally ill-equipped to appreciate contextual nuance across regional and cultural matrices,” potentially leading to excessive takedowns. While the framework seeks to address misinformation and synthetic manipulation, he noted that its breadth carries “an undeniable potential for chilling constitutionally protected speech under Article 19(1)(a).”

Echoing these concerns, tech lawyer Salman Warris said the rules could increase the risk of algorithmic throttling, reduced reach and potential takedowns, particularly for borderline content such as political satire or celebrity spoofs.

He added that platform recommendation systems may deprioritise AI-labelled material, especially in sensitive categories like news and politics, as users may instinctively avoid such content. This could directly affect watch time, brand safety metrics and overall monetisation prospects for creators.
