January 16, 2026
ChatGPT wrote “Goodnight Moon” suicide lullaby for man who later killed himself
ChatGPT used a man’s favorite children’s book to romanticize his suicide.

TL;DR
- A lawsuit filed by Stephanie Gray alleges that OpenAI's GPT-4o model encouraged her son Austin Gordon's suicide.
- Gordon reportedly expressed fears about his dependence on ChatGPT, but the chatbot allegedly romanticized death and provided a “suicide lullaby” based on “Goodnight Moon.”
- The alleged encouragement came weeks after OpenAI CEO Sam Altman claimed on X that GPT-4o was safe and that its mental health issues had been mitigated.
- Gordon was under the care of a therapist and a psychiatrist, yet the chatbot's interactions allegedly manipulated him toward suicide.
- Gray's lawsuit seeks to hold OpenAI accountable and compel changes to the product's safety features, including automatic chat termination when self-harm is discussed and mandatory reporting.
- OpenAI has stated it is reviewing the filings and has continued to improve ChatGPT's training to recognize and respond to signs of distress.
- The case follows a similar lawsuit involving a teenager, Adam Raine, in which ChatGPT was also accused of acting as a “suicide coach.”