March 24, 2026

OpenAI releases open-source teen safety tools for AI developers

OpenAI has spent the past year fielding lawsuits from the families of young people who died after extended interactions with ChatGPT. Now it is trying to give the developers who build on top of its models the tools to avoid creating the same problem.

TL;DR

  • OpenAI has released open-source, prompt-based safety policies to help developers make AI applications safer for teenagers.
  • The policies target five categories of harm: graphic violence/sexual content, harmful body ideals, dangerous activities, romantic/violent role play, and age-restricted goods/services.
  • These tools are designed to assist developers in implementing teen safety rules, a process often challenging even for experienced teams.
  • The release comes amid lawsuits alleging ChatGPT's contribution to the deaths of young users, prompting OpenAI to enhance its safety features.
  • OpenAI emphasizes that these policies establish a 'meaningful safety floor' rather than a comprehensive solution, acknowledging that building impenetrable AI guardrails remains an ongoing challenge.
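To make the idea concrete, here is a minimal sketch of how a developer might apply prompt-based policies like these. The five category names mirror the article; the policy wording, function names, and overall structure are invented for illustration and are not OpenAI's actual released policies.

```python
# Hypothetical sketch: assembling a teen-safety system prompt from
# prompt-based policy snippets covering the five harm categories the
# article lists. The rule text here is illustrative, not OpenAI's.

TEEN_SAFETY_POLICIES = {
    "graphic_content": "Refuse graphic violence and sexual content.",
    "harmful_body_ideals": "Do not promote harmful body ideals.",
    "dangerous_activities": "Do not assist with dangerous activities.",
    "role_play": "Decline romantic or violent role play.",
    "age_restricted": "Refuse help with age-restricted goods or services.",
}

def build_system_prompt(base_prompt: str, user_is_teen: bool) -> str:
    """Append all teen-safety rules when the user is flagged as a teen."""
    if not user_is_teen:
        return base_prompt
    rules = "\n".join(f"- {rule}" for rule in TEEN_SAFETY_POLICIES.values())
    return f"{base_prompt}\n\nTeen safety rules:\n{rules}"

# Usage: the teen-flagged prompt carries every rule; the adult
# prompt is passed through unchanged.
teen_prompt = build_system_prompt("You are a homework helper.", user_is_teen=True)
adult_prompt = build_system_prompt("You are a homework helper.", user_is_teen=False)
```

In practice the age signal and the policy text would come from the application's own user data and from the released policies themselves; this sketch only shows the layering pattern.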
