OpenAI now lets users designate a "trusted contact" whom the company can notify when it detects signs of self-harm in ChatGPT conversations.
The move comes as more and more people use ChatGPT as a "digital therapist" to discuss psychological issues.
Under the new mechanism, users aged 18 and over can add a trusted contact in the ChatGPT settings. The designated person receives an invitation and must accept it within a week; if they do not respond, the user can choose another contact.
OpenAI said the process is not fully automated: a team of specially trained staff reviews each case before any warning is sent. If a case is assessed as high risk, the company may alert the trusted contact by email, text message, or in-app notification.
The feature reflects growing pressure on artificial intelligence platforms to ensure user safety, especially as many people come to treat chatbots as a place to share personal psychological struggles.
However, balancing support for users in crisis with protecting their privacy remains a major challenge for OpenAI and the technology industry as a whole.