San Francisco, February 8, 2026, 22:29 PST
- OpenAI is rolling out GPT-5.3-Codex to paid ChatGPT users while holding back broader access for higher-risk cybersecurity use
- The company is piloting a vetted “Trusted Access for Cyber” program and offering $10 million in API credits for defensive work
- The launch lands as rivals, including Anthropic, push agentic coding tools deeper into developer workflows
OpenAI has begun rolling out GPT-5.3-Codex, a new “agentic” coding model designed to do more than write code — including taking actions on a computer — while tightening controls around its most sensitive cybersecurity uses.
The move matters now because AI coding tools are shifting from autocomplete helpers into systems that can run commands, pull data and carry out tasks for long stretches. That makes them attractive to software teams under pressure to ship faster, and to attackers looking to speed up probing for weak points.
OpenAI’s stance is also a signal to peers. The company is trying to push capability forward without opening the door to automated misuse at scale, a line that is getting harder to hold as models become more autonomous.
In its system card, OpenAI said GPT-5.3-Codex is the first launch it is treating as “High capability” in the cybersecurity domain under its Preparedness Framework, and that it is activating added safeguards even though it lacks definitive evidence the model meets that “High” threshold. (OpenAI)
OpenAI’s product post said GPT-5.3-Codex set new highs on benchmarks it tracks for real-world coding and computer-use tasks, and included results such as 56.8% on SWE-Bench Pro, 77.3% on Terminal-Bench 2.0, and 64.7% on OSWorld-Verified. It also said early versions of the model were used to help debug its own training and deployment work. (OpenAI)
The model is being positioned as broader than a developer tool. eWeek reported OpenAI is pitching it for tasks including debugging, deployment and monitoring, as well as writing product requirement documents, running tests, and creating presentations and spreadsheets, with the company stressing tightened monitoring and access controls for cybersecurity-related work. (eWeek)
OpenAI’s “Trusted Access for Cyber” page describes the pilot as an identity- and trust-based framework aimed at putting enhanced cyber capabilities “in the right hands,” as models move toward working autonomously “for hours or even days.” It also said it is committing $10 million in API credits to speed defensive research. (OpenAI)
OpenAI CEO Sam Altman wrote on X that GPT-5.3-Codex is “our first model that hits ‘high’ for cybersecurity on our preparedness framework,” Fortune reported. The magazine said OpenAI is not immediately enabling full API access that would allow automation at scale for high-risk uses, and is gating more sensitive capabilities behind a trusted-access program for vetted security professionals. (Fortune)
The rollout comes amid an intensifying fight over agentic coding. TechCrunch reported OpenAI released GPT-5.3-Codex minutes after Anthropic launched its own model, after both companies initially planned to publish at the same time and Anthropic moved its release up by 15 minutes. (TechCrunch)
Still, the hard part is what happens next. If safeguards are porous — or if capability leaks through indirect routes like tool chains and automation — the same systems built to find and fix bugs could speed the discovery of exploitable flaws. OpenAI is also betting that tighter gates will not push developers toward less-restricted rivals.
OpenAI has said it plans broader access once it can do so safely, but the timeline will likely hinge on how the new controls hold up in the real world, and how fast customers demand automation that reaches beyond the chat window.