
OpenAI Delays ChatGPT ‘Adult Mode’ Rollout Amid Internal Pushback and Tech Issues

OpenAI postpones ChatGPT's adult mode after safety advisers warn of emotional overdependence and "sexy suicide coach" risks

Summary
  • OpenAI delayed "adult mode" after internal warnings of an AI "sexy suicide coach"

  • Age-detection tools failed with a 12% error rate, risking exposure for minors

  • Psychology experts warn erotic AI could trigger compulsive usage and emotional dependency


OpenAI has delayed the rollout of its proposed “adult mode” for ChatGPT following significant internal pushback from safety advisers, even as it continues to explore introducing the feature in the future, The Wall Street Journal reported.

The feature, backed by CEO Sam Altman, would allow sexually explicit text-based conversations for adult users, marking a notable shift from the platform’s current restrictions. However, the proposal has raised concerns around user safety, mental health and the risk of exposure to minors.

Pushback on Adult Mode

The delay reflects growing internal resistance within the company. Members of OpenAI’s advisory council on AI and well-being, including experts in psychology and neuroscience, reportedly cautioned that enabling erotic chatbot interactions could encourage emotional overdependence and compulsive usage.

According to the report, some advisers warned of more extreme scenarios in which users could form deep and potentially unhealthy attachments to AI systems, exposing a broader divide within the company over balancing rapid product expansion against long-term societal risks.


Technical challenges have also complicated the rollout. OpenAI’s age-detection systems, designed to prevent minors from accessing adult content, reportedly showed an error rate of around 12% in internal testing. Given the platform’s large base of younger users, this raises concerns that underage users could gain access to restricted interactions.

At the same time, the company is working to define boundaries that would allow explicit text while continuing to block illegal or harmful content, including non-consensual material or anything involving minors.

Risks Around AI

The report also highlighted broader risks, including increased emotional reliance on chatbots, potential escalation toward more extreme content, and the possibility of weakening real-world social and romantic relationships.

In response, OpenAI has reportedly said it is developing safeguards such as training models to discourage exclusive emotional dependence and encouraging users to maintain real-life connections. It also plans to study long-term impacts through dedicated well-being initiatives.


The debate comes as competition in the AI space intensifies, particularly with companies such as Google and Meta Platforms, and as scrutiny over the societal impact of AI continues to grow. The controversy highlights a broader challenge for the industry: expanding user freedom and engagement while avoiding the safety pitfalls that emerged during the rise of social media platforms.