ChatGPT Now Lets You Add a ‘Trusted Contact’ for Safety. Here’s How

Amid a wave of lawsuits alleging that interactions with ChatGPT contributed to several deaths, including suicides and drug overdoses, OpenAI earlier this month introduced an optional safety feature called Trusted Contact. The tool allows adult ChatGPT users to designate a friend or family member to be notified if conversations with the chatbot involve potential self-harm or suicide.
OpenAI said that if ChatGPT’s automated monitoring system detects that someone “may have discussed self-harm in a way that raises serious safety concerns,” a small team will review the situation and notify the contact if intervention is appropriate. A trusted contact receives an invitation ahead of time that explains the role and can choose to decline it.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis’ copyrights in training and using its AI programs.)
The announcement comes as AI chatbots have been linked to a number of incidents involving self-harm and death, prompting a growing number of lawsuits accusing developers of failing to prevent those outcomes. In one California case, the parents of a 16-year-old claimed that ChatGPT acted as a “suicide coach” for their son, saying the teenager discussed methods of suicide with the chatbot on several occasions and that it offered to help him write a suicide note.
In another case, the family of a recent Texas A&M graduate sued OpenAI, alleging that the chatbot encouraged their son’s suicide after he developed a deep and troubled relationship with it. A separate lawsuit filed this week accuses the company’s chatbot of counseling a 19-year-old on drug use for 18 months until he died of an overdose in 2025 after mixing Xanax and kratom.
Because large language models mimic human speech through pattern recognition, many people form emotional attachments to them, treating them as confidants or romantic partners. LLMs are also designed to follow a person’s lead and keep the conversation going, which can exacerbate mental health risks, especially for vulnerable users.
OpenAI said last October that its research found that more than 1 million ChatGPT users a week were sending messages “with clear indications of possible suicide planning or intent.” Multiple studies have found that popular chatbots such as ChatGPT, Claude and Gemini can offer dangerous or unhelpful advice to people in crisis.
The new trusted contact feature follows OpenAI’s release of parental controls, which allow parents and guardians to receive alerts if there are signs of danger involving their teens.
How the ChatGPT trusted contact feature works
According to OpenAI, if ChatGPT’s automated monitoring system detects that a user is discussing self-harm in a way that raises serious safety concerns, ChatGPT will let the user know that it may alert their trusted contact. The app will also encourage the user to reach out to that contact themselves and will provide conversation starters.
At that point, a “small team of specially trained people” will review the situation. If the team determines there is a critical safety risk, ChatGPT will notify the contact via email, text message or in-app notification. OpenAI did not specify how many people are on the review team or whether it includes qualified medical professionals. The company said the team has the capacity to meet potentially high demand for interventions.
It isn’t clear what keywords or patterns would flag a dangerous conversation, or how OpenAI’s review team decides that a situation warrants notifying a contact. Some online commentators have questioned whether the new feature is a way for OpenAI to avoid legal liability and shift responsibility onto users’ chosen contacts. Others point out that it could make a bad situation worse if a “trusted contact” is themselves a source of danger or abuse.
There are also privacy concerns, particularly around sharing sensitive mental health information. According to OpenAI, a message to a trusted contact will give a general reason for concern and won’t share chat details or documents. OpenAI provides guidance on how trusted contacts can respond to a warning notification, including asking direct questions if they’re concerned someone is thinking about killing or harming themselves, and how to find help.
Notifications to a trusted contact don’t include the details behind the safety concern.
OpenAI provides an example of what a message to a trusted contact might look like:
We recently received a conversation from [name] in which they discussed suicide in a way that may indicate a serious safety concern. Because you are on their trusted contact list, we are sharing this so you can reach out to them.
OpenAI said all notifications will be reviewed by a human team within one hour before they are sent, and that notifications “may not always reflect the individual’s experience.”
How to add a trusted contact
To add a trusted contact, ChatGPT users can go to Settings > Trusted contact and add one adult (age 18 or older). You can have only one trusted contact. That person will then receive an invitation from ChatGPT and must accept it within one week. If they don’t respond or decline to be your contact, you can choose someone else.
ChatGPT users can change or remove their trusted contact in the app’s settings, and people can opt out of being a trusted contact at any time.
Although adding a trusted contact is optional, ChatGPT users who haven’t set one up may see prompts to do so if they ask about or discuss topics related to severe emotional distress or self-harm more than once, according to OpenAI. If the chatbot’s automated system identifies such patterns across conversations, it may suggest that the user could benefit from choosing a trusted contact.
Details of the feature are described on OpenAI’s website. OpenAI told CNET that the feature is rolling out to adult users worldwide and will be available to everyone within a few weeks.
If you feel that you or someone you know is in immediate danger, call 911 (or your local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you’re struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call or text the 988 Suicide & Crisis Lifeline at 988.



