April 7, 2026
“The problem is Sam Altman”: OpenAI insiders don’t trust CEO
OpenAI brainstorms ways AI can benefit humanity in an effort to counter bad vibes.

TL;DR
- OpenAI released policy recommendations for a future with superintelligence, emphasizing human benefit and transparency.
- A New Yorker investigation casts doubt on CEO Sam Altman's trustworthiness, citing insider accounts of deception and manipulation.
- Insiders describe Altman as a people-pleaser with a "sociopathic lack of concern for the consequences" of deceiving others.
- Former chief scientist Ilya Sutskever and research head Dario Amodei documented alleged deceptions and manipulations by Altman.
- Altman disputes claims, attributes narrative shifts to AI landscape changes, and admits to past conflict avoidance.
- OpenAI's policy ideas include shorter work weeks, a public wealth fund, worker protections, and a tax on automated labor.
- Critics suggest OpenAI's proposals may be a distraction from public fears about AI's safety, job displacement, and energy use.
- Altman has privately lobbied against stricter AI safety laws, which could be enacted if certain political parties gain control of Congress.
- The article questions the public's ability to trust OpenAI's vision without trusting its leader.
- Some experts believe Altman sets up structures that he later disregards when they constrain him.