Will ChatGPT Notify a Trusted Contact If You’re in Distress? Here’s What We Know

Outlook Business Desk

OpenAI's New Feature

OpenAI is developing a safety tool that lets ChatGPT users add a trusted contact who can be alerted if the system detects signs of a potential mental health crisis, signalling a move towards stronger user protection measures.


How It Works

The feature will let adult users select a trusted contact, such as a friend or family member, who may receive alerts if ChatGPT detects behaviour indicating the user could require emotional or psychological support during interactions.

Trigger Signals Detailed

The system may monitor signals such as expressions of distress, harmful intent or behavioural patterns suggesting emotional instability, but OpenAI has not yet clarified the exact thresholds or specific conditions that would trigger an alert.


Why It Matters

The development comes as OpenAI reportedly faces growing scrutiny over how its chatbot engages with vulnerable users, particularly as more people turn to AI tools for personal conversations and emotional support in private, sensitive situations.

Reports and Lawsuits

The update follows reports and lawsuits claiming that chatbot interactions worsened mental health issues, including cases of delusions, self-harm and social withdrawal linked to long, deeply personal conversations with AI tools.

What Probes Found

Meanwhile, investigations found that prolonged use of AI tools may, in some cases, reinforce harmful beliefs, raising concerns about how these systems respond in emotionally sensitive situations or when users are psychologically vulnerable.

OpenAI's Response

In response, OpenAI said it is working with internal expert teams focused on well-being and healthcare to improve how its models handle sensitive situations, aiming to make responses safer and more responsible during critical interactions.


Privacy & Adoption

A key challenge is balancing privacy with safety: many users value keeping AI conversations private, and the feature depends on users choosing to opt in and add a trusted contact. Its effectiveness will also hinge on how reliably OpenAI can detect genuine distress.

