
Calvin Wankhede / Android Authority
TL;DR
- ChatGPT is getting a new “Trusted Contact” feature.
- It allows adults to add a trusted contact that ChatGPT can alert for support in moments of crisis.
- OpenAI is also using human reviewers to determine whether a conversation hints at safety concerns.
ChatGPT is one of the most popular AI chatbots, and users rely on it for conversations about nearly everything, including self-harm. ChatGPT has been involved in several self-harm cases, and OpenAI has even been sued over such incidents. Now OpenAI is introducing a new “Trusted Contact” feature to help users in these situations get access to real-world support.
Interested adult users of ChatGPT can add a trusted adult contact (18+ globally or 19+ in South Korea) in their ChatGPT settings. The added contact will also receive an invitation and can choose to accept or decline it.

Once the feature is set up, the user doesn’t need to do anything else. If ChatGPT detects during a conversation that the user may be discussing self-harm, it will let them know that their trusted contact may be notified.
However, OpenAI isn’t relying solely on automated monitoring systems to make this determination. A team of specially trained human reviewers will also review the conversation. This team is responsible for deciding whether the conversation indicates a safety concern, and for making the final call on whether the trusted contact needs to be notified.
The feature is optional, and OpenAI explains that even when it is used, the trusted contact will not receive a transcript of the user’s chats, in order to protect their privacy. The contact will simply receive a notification encouraging them to check in with the user.

This new feature joins ChatGPT’s existing safeguards for sensitive conversations, which include encouraging people to contact helplines and even to take a break from using ChatGPT itself.
