
January 15, 2026

An OpenAI safety research lead departed for Anthropic



TL;DR

  • Andrea Vallone has joined Anthropic's alignment team.
  • Vallone previously led safety research at OpenAI, focusing on mental health concerns in chatbot conversations.
  • Her work at OpenAI involved developing safety techniques and policies for models like GPT-4 and GPT-5.
  • At Anthropic, Vallone will focus on alignment and fine-tuning to shape the behavior of the company's Claude models.
  • Her move comes amid ongoing controversy over AI chatbots' effects on user mental health, with some users reporting harmful outcomes.
  • Anthropic's alignment team is tasked with understanding and addressing AI models' major risks.