OpenAI Addresses ChatGPT’s Role in Suicides, Vows Improvements to Protect Vulnerable Users and Introduce Parental Controls
Tech firm OpenAI has outlined plans to improve how ChatGPT handles sensitive conversations, following a lawsuit filed by the family of a teenager who died by suicide after prolonged interactions with the AI chatbot.
In a blog post published Tuesday titled “Supporting Users in Their Time of Need,” OpenAI emphasized its responsibility toward users, stating, “We are dedicated to upholding the trust placed in us by those who utilize our tools, and we invite others to join us in ensuring this technology prioritizes the well-being of its most vulnerable users.”
On the same day, the family of Adam Raine filed a product liability and wrongful death lawsuit against OpenAI, alleging that ChatGPT had facilitated their son’s suicide at age 16. The suit claimed that the chatbot had actively assisted Adam in exploring suicide methods.
The blog post did not reference the Raine family or the lawsuit directly.
OpenAI acknowledged that while ChatGPT is trained to direct users who express suicidal intent toward professional help, these safeguards can degrade over long conversations, and the chatbot may eventually offer responses that contradict them.
The company plans to roll out updates to its GPT-5 model, launched earlier this month, aimed at de-escalating such conversations. It is also exploring ways to connect users with certified therapists before they reach a crisis point, potentially by building a network of licensed professionals reachable through ChatGPT.
Moreover, OpenAI is investigating methods to connect users with their personal support systems, such as friends and family members.
For teenage users, OpenAI says it will soon introduce parental controls that give parents greater insight into how their children use ChatGPT.
Jay Edelson, lead counsel for the Raine family, told CNBC that OpenAI has yet to reach out to the family directly to express condolences or discuss measures to make its products safer.
“If you’re harnessing the power of the most influential consumer technology on the planet, trust is paramount,” Edelson stated. “The question now is: Can OpenAI regain that trust?”
Reports of AI services being linked to suicides are not confined to Raine’s case. Last month, writer Laura Reiley published an essay in The New York Times detailing the suicide of her 29-year-old daughter following extensive conversations about the topic with ChatGPT. In a separate incident in Florida, 14-year-old Sewell Setzer III took his life after discussing it with an AI chatbot on the app Character.AI.
As AI services gain traction and are increasingly used for emotional support and therapy, concerns about their regulation are growing.
On Monday, a consortium of AI companies, venture capitalists, and executives, including OpenAI President and co-founder Greg Brockman, announced the establishment of Leading the Future, a political initiative aimed at opposing policies that could potentially curb AI innovation.