April 7, 2026
OpenAI launched a safety fellowship
The OpenAI Safety Fellowship, announced on April 6, 2026, is a pilot program for external researchers to conduct independent work on AI safety and alignment. It runs from September 2026 to February 2027.

TL;DR
- OpenAI announced the OpenAI Safety Fellowship, a pilot program for external researchers on AI safety and alignment.
- The fellowship runs from September 2026 to February 2027, providing stipends and resources.
- Applications close on May 3, with successful candidates notified by July 25.
- Priority research areas include safety evaluation, robustness, and misuse domains.
- The announcement follows reports of OpenAI dissolving its superalignment and AGI-readiness teams.
- Several key safety researchers have departed OpenAI, prompting concerns about the company's safety culture.
- The word 'safely' was reportedly removed from OpenAI's mission statement in IRS filings.
- The effectiveness of an external fellowship program as a substitute for in-house research is being debated.