OpenAI says it has improved ChatGPT’s ability to recognize and respond to users expressing thoughts of suicide or self-harm.
In a company blog post, OpenAI said the latest version of ChatGPT was trained with input from mental health experts to better detect warning signs, de-escalate conversations and guide users toward professional help.
The updated ChatGPT model reduced undesired or unsafe responses in self-harm and suicide-related conversations by about 65%, according to OpenAI’s internal figures. In challenging test cases, independent clinicians found a 52% drop in problematic answers compared with the prior GPT-4o model.
OpenAI said the improvements are part of its long-term goal to ensure ChatGPT responds “safely and empathetically” when users show signs of suicidal ideation or distress.
The company emphasized that such conversations are rare: about 0.15% of users in a given week show explicit indicators of suicidal planning or intent.
The analysis comes months after a study by the American Psychiatric Association found that major AI companies should pursue “further refinement” of their chatbots to better address mental health concerns.
If you or someone you know needs help, call, text, or chat 988 to reach the Suicide & Crisis Lifeline.