
April 15, 2026

OpenAI releases GPT-5.4-Cyber for vetted security teams, scaling Trusted Access programme

In short: OpenAI is releasing GPT-5.4-Cyber, a model fine-tuned for defensive cybersecurity with lowered refusal boundaries and binary reverse engineering capabilities, and scaling its Trusted Access for Cyber programme to thousands of verified defenders. The move comes a week after Anthropic restricted its more powerful Mythos model to just 11 organisations, setting up a philosophical split: OpenAI bets on broad verified access while Anthropic opts for tightly gated deployment.

TL;DR

  • OpenAI released GPT-5.4-Cyber, a variant of GPT-5.4 fine-tuned for defensive cybersecurity.
  • The model features a lower refusal boundary for queries related to vulnerability research, exploit analysis, and malware behavior.
  • GPT-5.4-Cyber includes binary reverse engineering capabilities for analyzing compiled software.
  • OpenAI is expanding its Trusted Access for Cyber (TAC) programme to include thousands of verified defenders and hundreds of teams.
  • TAC utilizes an identity-and-trust framework with verification tiers to grant access to more capable models.
  • This release is seen as a direct response to Anthropic's Project Glasswing, which restricts access to its powerful Claude Mythos Preview model.
  • OpenAI's strategy emphasises broad access gated by identity verification, contrasting with Anthropic's approach of tightly restricted deployment.
  • The dual-use nature of cybersecurity AI, where the same capabilities benefit both defenders and attackers, is the key challenge OpenAI aims to manage through verification and usage monitoring.
  • Top-tier users of GPT-5.4-Cyber may be required to waive Zero Data Retention, giving OpenAI visibility into how the model is used.

