OpenAI is expanding its cybersecurity AI program and launching a new specialized model
The Trusted Access for Cyber program will scale to thousands of verified defenders, with the new model offering fewer capability restrictions

OpenAI's latest push into defensive security centers on a new model, GPT-5.4-Cyber, and a broader rollout of its Trusted Access for Cyber (TAC) program, which extends verified access to thousands of individual professionals and hundreds of teams guarding critical software. The model carries relaxed capability restrictions compared to standard deployments.
GPT-5.4-Cyber is a variant of GPT-5.4 trained to be more permissive for legitimate security tasks. It includes binary reverse engineering capabilities, which allow security professionals to analyze compiled software for malware, vulnerabilities, and security robustness without access to source code. Because the model carries higher risk than standard deployments, OpenAI said it is beginning with a limited rollout to vetted security vendors, organizations, and researchers.
On the access side, the program now distinguishes between multiple verification pathways. Individuals can confirm their credentials at chatgpt.com/cyber; enterprises can work through an OpenAI representative to bring their entire team into the program; and existing TAC participants who complete further authentication as genuine security defenders become eligible to request GPT-5.4-Cyber.
OpenAI said the program's expansion comes ahead of the release of more capable models in the coming months. The company said GPT-5.4 has been classified as a "high" cyber capability model under its Preparedness Framework, and that cyber-specific safety training began with GPT-5.2 before being expanded through subsequent releases.
The company also reported that Codex Security — a tool that automatically monitors codebases, validates issues, and proposes fixes — has contributed to more than 3,000 fixed critical- and high-severity vulnerabilities since its recent launch. OpenAI said access to more permissive models may come with limitations, including restrictions on Zero-Data Retention uses, where the company has less visibility into how the model is being deployed.
OpenAI first disclosed plans for the TAC program and a cybersecurity-focused model earlier this year, positioning itself against Anthropic in the market for AI systems built for security work. Anthropic's competing program, Project Glasswing, set aside up to $100 million in usage credits for its Mythos model and limited its initial rollout to twelve partners — including Amazon Web Services, Apple, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and Nvidia — each contractually bound to use the model for defensive security work only.
OpenAI's cybersecurity efforts also include a $10 million Cybersecurity Grant Program and free security scanning for open-source projects through Codex for Open Source, which has reached more than 1,000 projects, the company said.