OpenAI just launched a new research preview called GPT-5.3 Codex-Spark. The model is built for one thing: extreme speed. While the standard GPT-5.3 Codex focuses on deep reasoning, Spark is designed for near-instant response times. It is the result of a deep hardware-software integration between OpenAI and Cerebras.
The results are game-changing. Spark is 15x faster than the flagship GPT-5.3 Codex, consistently delivering over 1,000 tokens per second. That speed effectively removes the delay between a developer's thought and the model's code output.
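To put those throughput figures in perspective, a quick back-of-the-envelope calculation shows what they mean in wall-clock time. The tokens-per-second numbers come from the article; the 300-token completion size is an illustrative assumption:

```python
# Back-of-the-envelope generation latency at the quoted throughputs.
SPARK_TPS = 1000    # tokens/sec, GPT-5.3 Codex-Spark (from the article)
FLAGSHIP_TPS = 70   # tokens/sec, flagship GPT-5.3 Codex (from the article)

def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds to stream num_tokens at a constant throughput."""
    return num_tokens / tokens_per_second

tokens = 300  # a mid-sized code block (assumption)
print(f"Spark:    {generation_time(tokens, SPARK_TPS):.2f}s")    # 0.30s
print(f"Flagship: {generation_time(tokens, FLAGSHIP_TPS):.2f}s")  # 4.29s
```

At these rates, a block that streams for several seconds on the flagship model finishes in a fraction of a second on Spark, which is why the delay stops being perceptible.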
The Hardware: Wafer-Scale Engineering
The huge performance jump is powered by the Cerebras Wafer-Scale Engine 3 (WSE-3). Traditional AI models run on clusters of small GPUs that must communicate with each other over interconnects, creating a bottleneck that slows the model down.
The WSE-3 is different: it is a single, giant chip the size of an entire silicon wafer. Because the whole model lives on one piece of silicon, there are no interconnects to slow it down. This architecture provides:
- Massive on-chip memory.
- Ultra-high bandwidth.
- Low-latency compute.
By running inference on the Cerebras CS-3 system, OpenAI reaches speeds that traditional GPU clusters cannot.
Software Optimizations and Low Latency
Speed is not just about the chip. OpenAI also re-engineered the way the model communicates with your machine, moving away from traditional request-response methods to a persistent WebSocket connection.
This change brings several technical improvements:
- Round-Trip Time (RTT): Client-server overhead is reduced by 80%.
- Time-to-First-Token (TTFT): Improved by 50%, meaning code starts appearing almost the moment you hit enter.
- Per-Token Overhead: Internal processing time per token is cut by 30%.
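OpenAI has not published its measurement harness, but TTFT and throughput are straightforward to measure for any streaming endpoint. Here is a minimal sketch that works on any iterator of tokens, using a simulated token generator as a stand-in for a real model stream (the function names and delays are illustrative, not part of the Codex API):

```python
import time
from typing import Iterable, Iterator

def measure_stream(tokens: Iterable[str]) -> dict:
    """Measure time-to-first-token and overall tokens/sec for a token stream."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in tokens:
        if ttft is None:
            ttft = time.perf_counter() - start  # first token arrived
        count += 1
    elapsed = time.perf_counter() - start
    return {"ttft_s": ttft, "tokens": count, "tps": count / elapsed}

def fake_stream(n: int, delay: float) -> Iterator[str]:
    """Stand-in for a model's streaming output: n tokens, `delay` sec apart."""
    for i in range(n):
        time.sleep(delay)
        yield f"tok{i}"

stats = measure_stream(fake_stream(50, 0.001))
print(stats)
```

Pointing the same harness at a real stream from two different models is enough to reproduce the kind of TTFT and tokens-per-second comparison quoted above.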
These optimizations enable "Real-Time Steering": you can interrupt the model while it is typing and redirect its logic without waiting for the full block to finish.
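The interrupt mechanism itself is not documented, but the client-side pattern is a familiar one: consume the stream as a cancellable task and stop it the moment the user intervenes, rather than waiting for the response to complete. A minimal asyncio sketch with a simulated stream (every name here is illustrative, not the Codex client API):

```python
import asyncio

async def fake_stream(n: int):
    """Stand-in for a streaming model response."""
    for i in range(n):
        await asyncio.sleep(0.001)
        yield f"tok{i} "

async def consume(collected: list):
    """Append tokens as they arrive; cancellable at any point."""
    async for tok in fake_stream(1000):
        collected.append(tok)

async def main() -> int:
    collected: list = []
    task = asyncio.create_task(consume(collected))
    await asyncio.sleep(0.05)   # user watches partial output...
    task.cancel()               # ...and interrupts mid-generation
    try:
        await task
    except asyncio.CancelledError:
        pass
    # Over a persistent connection, the client would now send a redirect
    # message on the same socket instead of opening a new request.
    return len(collected)

received = asyncio.run(main())
print(f"received {received} tokens before interrupt")
```

The persistent WebSocket matters here because the follow-up instruction reuses the open connection, so steering costs one message rather than a full new request.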
The Trade-offs: Speed vs. Reasoning
GPT-5.3 Codex-Spark is optimized for throughput, not deep complexity. It is a smaller model than the flagship GPT-5.3 Codex and, as a result, has lower reasoning depth.
Developers should be aware of these performance differences:
- Benchmarks: Spark scores lower on SWE-Bench Pro and Terminal-Bench 2.0 than the flagship model and may struggle with very complex, multi-file architecture changes.
- Security: Under OpenAI's Preparedness Framework, the flagship GPT-5.3 Codex is rated as "High" capability for cybersecurity. Spark does not meet that threshold and should not be used for sensitive security logic or autonomous authentication tasks.
Quick Specs and Access
Spark is available now to ChatGPT Pro users and developers. You can access it through the following tools:
- Codex App: Use the model picker to select "Spark."
- VS Code Extension: Integrated directly into the composer.
- CLI: Run `codex --model gpt-5.3-codex-spark`.
| Feature | GPT-5.3 Codex-Spark | GPT-5.3 Codex (Flagship) |
| --- | --- | --- |
| Tokens per Second | 1000+ | ~70 |
| Context Window | 128k | 128k |
| Hardware | Cerebras WSE-3 | NVIDIA GPU Clusters |
| Best For | Fast Iteration | Deep Reasoning / Security |
Key Takeaways
- Great Speed: Spark is 15x faster than the flagship GPT-5.3 Codex, delivering an unprecedented throughput of over 1,000 tokens per second for near-instant code generation.
- Custom Silicon Infrastructure: This is OpenAI's first model to run on Cerebras Wafer-Scale Engine 3 (WSE-3) hardware rather than traditional NVIDIA GPUs, using wafer-scale memory to eliminate data bottlenecks.
- Drastic Latency Reduction: The persistent WebSocket connection cuts client-server round-trip overhead by 80% and improves time-to-first-token by 50%.
- Real-Time Steering: Designed for "micro-iterations," the model's speed lets developers interrupt and redirect logic in real time, shifting the workflow from batch processing to live pair programming.
- Targeted Capability Trade-offs: While faster, Spark has lower reasoning depth than the flagship model and does not meet the "High capability" threshold for cybersecurity in OpenAI's Preparedness Framework, making it unsuitable for sensitive auth or security tasks.