San Francisco, February 8, 2026, 22:29 PST
- OpenAI is releasing GPT-5.3-Codex to paying ChatGPT customers but restricting wider availability due to cybersecurity risks
- The company is piloting a “Trusted Access for Cyber” program for vetted security professionals and providing $10 million in API credits to support defensive efforts
- This launch arrives while competitors like Anthropic are embedding agentic coding tools more deeply into developer workflows
OpenAI has started deploying GPT-5.3-Codex, a new “agentic” coding model that goes beyond writing code to performing tasks directly on a computer. At the same time, the company is tightening safeguards around its most sensitive cybersecurity applications.
This shift is critical as AI coding tools evolve beyond simple autocomplete functions to systems capable of executing commands, retrieving data, and handling extended tasks. Such capabilities appeal to software teams racing to meet deadlines—and to attackers eager to accelerate their search for vulnerabilities.
OpenAI’s posture signals how it intends to compete: advance capabilities while preventing widespread automated abuse, a balance that grows tougher as models gain more autonomy.
OpenAI’s system card describes GPT-5.3-Codex as the company’s first release treated as “High capability” in cybersecurity under its Preparedness Framework. OpenAI says it has not definitively confirmed that the model crosses the “High” threshold, but it is applying the stronger safeguards as a precaution. 1
OpenAI’s product update revealed that GPT-5.3-Codex hit new benchmark highs in real-world coding and computer tasks, scoring 56.8% on SWE-Bench Pro, 77.3% on Terminal-Bench 2.0, and 64.7% on OSWorld-Verified. The company also noted that early versions of the model assisted in debugging its own training and deployment processes. 2
OpenAI is marketing the model as more than just a developer tool. According to eWeek, it’s aimed at everything from debugging, deployment, and monitoring to drafting product requirement documents, executing tests, and building presentations and spreadsheets. The company emphasizes stronger monitoring and access controls to handle cybersecurity tasks safely.
OpenAI’s “Trusted Access for Cyber” page outlines the pilot as an identity- and trust-based framework designed to ensure enhanced cyber tools end up “in the right hands,” especially as models begin operating autonomously “for hours or even days.” The company also pledged $10 million in API credits to accelerate defensive research. 3
OpenAI CEO Sam Altman announced on X that GPT-5.3-Codex is “our first model that hits ‘high’ for cybersecurity on our preparedness framework,” according to Fortune. The outlet noted that OpenAI isn’t opening full API access right away, limiting automation at scale for high-risk cases. Instead, they’re restricting the most sensitive features through a trusted-access program aimed at vetted security experts. 4
The rollout heats up the battle over agentic coding. TechCrunch reported that OpenAI released GPT-5.3-Codex just minutes after Anthropic shipped its own model. The two had reportedly aimed for a simultaneous launch, but Anthropic moved its release up by 15 minutes. 5
But the real challenge lies ahead. If safeguards aren’t airtight—or if capabilities slip out via indirect means like tool chains and automation—the very systems designed to identify and patch bugs might instead accelerate the hunt for exploitable vulnerabilities. OpenAI is banking on the idea that stricter controls won’t drive developers to seek out less-restricted competitors.
OpenAI says it will expand access when it can be done safely, but the timeline depends heavily on how these new controls perform in practice and how quickly users push for automation beyond just the chat interface.