India Warns X of ‘Safe Harbour’ Loss after Grok-Generated Obscene Images Spread Online

MeitY issues a 72-hour notice to X (formerly Twitter) over obscene images and deepfakes generated by Grok AI

Tesla CEO Elon Musk. Photo: Twitter
Summary
  • MeitY issued a 72-hour ultimatum to X to remove obscene AI-generated images

  • The notice targets Grok’s “Spicy Mode,” which allegedly lacks adequate safety filters

  • X must submit a compliance report detailing technical fixes

The Centre has warned X, formerly Twitter, over AI-generated obscene content, saying the platform risks losing its “safe harbour” status if it does not remove flagged images and videos created using xAI’s Grok and submit an auditable compliance report, ET reported.

The Ministry of Electronics and Information Technology (MeitY) had earlier cautioned that failure to comply within 72 hours could lead to withdrawal of X’s protection under Section 79 of the Information Technology Act. The provision shields intermediaries from liability for user-generated content, provided they adhere to government rules and act promptly on takedown directions.

Grok’s Image Generation Feature

The notice follows a surge of complaints that Grok’s image-generation features, particularly its so-called “Spicy Mode,” have been misused to create non-consensual, sexualised deepfakes of public figures and private individuals. Regulators have said that because Grok-generated content can surface directly in user profiles and timelines, abusive images have at times spread widely before moderation systems can intervene. MeitY has reportedly sought the removal of flagged content, suspension of accounts involved in mass dissemination, and the submission of verifiable moderation and enforcement logs.

An xAI employee has acknowledged the issue, saying engineering teams are working to introduce tighter guardrails. However, critics and regulators argue that the company has yet to present a clear, auditable timetable for fixes.

MeitY has also questioned whether X’s India-based compliance officers have the authority and oversight mandated under Indian rules, reviving a familiar point of contention from earlier regulatory disputes. Beyond faster takedowns, regulators are now pressing for design-level changes to Grok itself, including disabling or sharply restricting NSFW modes, preventing the generation of realistic likenesses without consent, and maintaining independently verifiable moderation records.

X’s Future in India

This episode raises broader legal and policy stakes. If MeitY strips X of safe-harbour status, the platform could face direct liability for user posts in India, as well as stricter obligations on moderation and data disclosures. That would set a precedent for how governments treat generative-AI features on social platforms and could encourage similar regulatory moves abroad. Indeed, regulators in Malaysia and parts of Europe are already probing Grok outputs and urging platforms to tighten safeguards.

For platforms and AI-product teams, the trade-offs are stark: throttle or redesign features that many users value, or accept tighter regulation and increased legal risk. Regulators appear focused on demonstrable, auditable fixes rather than promissory statements, signalling a shift from reactive content takedowns toward scrutiny of product design and model-level behaviour that foreseeably enables harm.
