OpenAI expands Trusted Access for Cyber, unveils GPT‑5.4‑Cyber and pledges $10M in credits

OpenAI is expanding a program that gives vetted users access to more powerful cybersecurity features in its AI systems, while offering $10 million in free usage to security companies and researchers.

In a blog post published April 16 titled “Accelerating the cyber defense ecosystem that protects us all,” the company said it is widening its Trusted Access for Cyber program and rolling out GPT‑5.4‑Cyber, a tuned version of its GPT‑5.4 model that is more “cyber‑permissive” for defensive tasks. OpenAI is also awarding millions of dollars in API credits to security firms and open source projects, and enlisting some of the world’s largest banks and technology companies as partners.

Trusted Access for Cyber, or TAC, is OpenAI’s framework for managing who can use the most powerful cyber features in its models. First detailed publicly on Feb. 5, TAC relies on identity verification, “trust signals” and enhanced monitoring to decide which customers can access more permissive tools. Participants must go through know‑your‑customer checks and agree not to use OpenAI’s systems for cyber abuse. The company says it can suspend or terminate access if those terms are violated.

On April 14, in a post titled “Trusted access for the next era of cyber defense,” OpenAI said it is scaling TAC “to thousands of verified individual defenders and hundreds of teams.” As part of that expansion, the company introduced GPT‑5.4‑Cyber, a variant of its latest model that is tuned to lower refusal barriers for legitimate cybersecurity work. OpenAI says that includes advanced defensive workflows such as binary reverse engineering, a technique used to analyze compiled software for flaws.

Only users in the highest TAC tiers will be allowed to use GPT‑5.4‑Cyber. OpenAI has labeled the underlying GPT‑5.4 model as having “high” cyber capability in its internal Preparedness Framework and says it has added cyber‑specific safeguards progressively across the GPT‑5.x line, starting with GPT‑5.2 and continuing through GPT‑5.3‑Codex to GPT‑5.4.

The April 16 announcement also carries a financial commitment. OpenAI said it is putting $10 million in API credits behind its Cybersecurity Grant Program to support defensive work at outside organizations. The program, which the company has run since 2023, was described in February as shifting toward large‑scale deployment of OpenAI models for cyber defense.

Initial recipients of those credits include Socket, Semgrep, Calif and Trail of Bits. Socket and Semgrep develop tools for code and software supply‑chain security, while Calif and Trail of Bits are known for vulnerability research and security engineering. OpenAI has previously said that earlier security‑focused tools, including a system called Codex Security, have contributed to thousands of vulnerability fixes, though those figures come from the company’s own reporting.

OpenAI is also emphasizing the breadth of its industry backing. According to the April 16 post, organizations that have signed up to support TAC include Bank of America, BlackRock, BNY, Citi, Cisco, CrowdStrike, Goldman Sachs, iVerify, JPMorgan Chase, Morgan Stanley, NVIDIA, Oracle, SpecterOps and Zscaler.

“BNY is committed to helping protect the security and resilience of the financial system as AI capabilities accelerate. We are working closely with those at the forefront of enabling these efforts. Building on our ongoing collaboration with OpenAI, we are pleased to participate in their Trusted Access for Cyber program,” Leigh‑Ann Russell, chief information officer and global head of engineering at BNY, said in the OpenAI post.

Government‑affiliated AI safety labs are also involved. OpenAI says it has granted GPT‑5.4‑Cyber access to the U.S. Center for AI Standards and Innovation, housed at the National Institute of Standards and Technology, and to the UK AI Security Institute. Both organizations are tasked with testing and evaluating advanced AI systems. Their evaluations will focus on the model’s cyber capabilities and the safeguards around it, according to OpenAI.

The move comes as major AI developers are experimenting with different ways to handle tools that can help find and fix software vulnerabilities but could, in principle, assist attackers. In early April, rival AI developer Anthropic limited access to a model called Mythos Preview to a small group of organizations because of concerns about offensive misuse. Media coverage has contrasted Anthropic’s tightly controlled rollout with OpenAI’s TAC approach, which aims for wider, but identity‑gated, access.

Fouad Matin, a cyber researcher at OpenAI, framed the company’s strategy as an attempt to put stronger tools in more defenders’ hands. “This is a team sport, we need to make sure that every single team is empowered to secure their systems,” Matin told reporters, according to Axios. “No one should be in the business of picking winners and losers when it comes to cybersecurity.”

By expanding TAC, launching GPT‑5.4‑Cyber and putting $10 million in credits behind security partners, OpenAI is turning its identity‑based access system into a real‑world test of whether tightly controlled but relatively broad availability of “cyber‑permissive” AI can strengthen defense while containing misuse — all under the eye of major banks, security firms and government evaluators.

Tags: #openai, #cybersecurity, #ai, #gpt5