April 6, 2026

Introducing the OpenAI Safety Fellowship

A pilot program to support independent safety and alignment research and develop the next generation of talent

TL;DR

  • OpenAI is launching the Safety Fellowship, a pilot program for external researchers focused on AI safety and alignment.
  • The program runs from September 14, 2026, to February 5, 2027.
  • Priority research areas include safety evaluation, ethics, robustness, and misuse domains.
  • Fellows will work with OpenAI mentors, with workspace available in Berkeley or the option to participate remotely.
  • Fellows receive a monthly stipend, compute support, and mentorship.
  • Each fellow is expected to produce a substantial research output, such as a paper or dataset.
  • Applications are open until May 3, with notifications by July 25.