OpenAI has added GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano to its API suite, phasing out GPT-4.5 Preview while advancing code generation, instruction following, and long-context processing capabilities.
Signaling, in effect, the failure of GPT-4.5, the new 4.1 models introduce context windows of up to a million tokens, enabling native handling of full repositories, extensive documents, and complex multi-turn agent workflows within a single call.
While researching this article, I was able to use GPT-4.1 to "vibe code" a simple Python-based dungeon crawler in five minutes and five prompts. The model made no errors in its code; the only issues involved identifying the relevant sprites in the asset atlas I imported.

Thanks to its large context window, it was also able to successfully identify the functionality of a large code repository within a few prompts.
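In practice, that kind of repository comprehension comes down to packing file contents into one long-context prompt. The sketch below is a minimal, hypothetical illustration: the `pack_repo` helper is my own, and the commented-out call assumes the official `openai` Python SDK and a configured API key.

```python
from pathlib import Path

def pack_repo(root: str, suffixes: tuple = (".py",)) -> str:
    """Concatenate matching source files into one prompt-friendly string,
    tagging each file with its relative path so the model can cite it."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            rel = path.relative_to(root)
            parts.append(f"### FILE: {rel}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

# Hypothetical usage with the openai SDK (requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# response = client.responses.create(
#     model="gpt-4.1",
#     input=f"Summarize what this codebase does:\n\n{pack_repo('./my_repo')}",
# )
# print(response.output_text)
```

Even at a million tokens the window is finite, so a real pipeline would still want to filter vendored dependencies and generated files before packing.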
Model Capabilities and Transition Path
Per OpenAI, GPT-4.1 achieves a 54.6% score on SWE-bench Verified, reflecting an improved ability to produce runnable code patches that resolve real-world repository issues. This outpaces GPT-4o's 33.2% and GPT-4.5's 38% on the same benchmark. The model also executes code diffs more precisely, scoring 53% accuracy on Aider's polyglot benchmark in diff format, more than doubling GPT-4o's 18%.
Instruction-following fidelity has also been refined. On Scale's MultiChallenge, GPT-4.1 reaches 38.3% accuracy, compared to 27.8% for GPT-4o. These improvements cover adherence to strict output formats, compliance with constraints, and handling of nested or contradictory instructions.
According to the AI coding platform Windsurf, internal evaluations show that GPT-4.1 produces cleaner diffs and aligns more closely with structured developer workflows.
The models' long-context processing now supports 1 million tokens, up from the previous 128K-token window.
To validate this, OpenAI released MRCR, an open-source evaluation that tests a model's ability to retrieve specific details from dense, distractor-heavy context blocks. GPT-4.1 also scored 72% on the long-video, no-subtitles category of the Video-MME benchmark, setting a new high.
Efficiency gains across the series and agent use
The GPT-4.1 mini model delivers latency and cost reductions while maintaining comparable performance. OpenAI stated that GPT-4.1 mini cuts inference latency by nearly 50% and cost by 83% relative to GPT-4o, with equal or superior scores on several intelligence evaluations.
Meanwhile, GPT-4.1 nano, optimized for low-latency tasks, achieves 80.1% on MMLU, 50.3% on GPQA, and 9.8% on Aider's polyglot coding benchmark. These results exceed GPT-4o mini in key areas and position nano for use in classification, autocomplete, and reactive agentic systems.
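For the classification use case, a nano deployment can be as simple as a single-label constrained prompt. The following is a hedged sketch rather than OpenAI sample code: the helper name and labels are invented, and the payload shape follows the Chat Completions API's `messages`/`max_tokens` convention.

```python
def build_classification_request(ticket_text: str, labels: list) -> dict:
    """Build a minimal chat-completion payload that asks the model to
    reply with exactly one label from an allowed set."""
    return {
        "model": "gpt-4.1-nano",       # low-latency tier named above
        "messages": [
            {"role": "system",
             "content": "Reply with exactly one of: " + ", ".join(labels)},
            {"role": "user", "content": ticket_text},
        ],
        "max_tokens": 5,               # a label, nothing more
        "temperature": 0,              # deterministic routing
    }

# Hypothetical usage (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# req = build_classification_request(
#     "My card was charged twice.", ["billing", "bug", "other"]
# )
# label = client.chat.completions.create(**req).choices[0].message.content.strip()
```

Capping the output at a handful of tokens is what makes this cheap enough to sit in a reactive loop.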
There is no added cost for long-context use across the GPT-4.1 series. Token usage follows standard API pricing, allowing developers to scale applications involving large document retrieval, repository comprehension, or full-project editing without premium pricing tiers.
Improvements in instruction fidelity and context retention bolster the model family's viability for agentic applications. With OpenAI's Responses API, developers can deploy GPT-4.1-based systems that autonomously execute chained operations such as resolving customer tickets, mining documents for insights, or operating across multi-step task environments.
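One way such chained operations can be expressed, assuming the Responses API threads conversational state via `previous_response_id` as documented, is a loop that feeds each step's response ID into the next call. The `chain_steps` helper below is an illustrative sketch, not an official pattern.

```python
def chain_steps(client, model: str, steps: list) -> list:
    """Run a list of prompts as one chained exchange, threading state
    through the Responses API's previous_response_id field."""
    outputs, prev_id = [], None
    for prompt in steps:
        resp = client.responses.create(
            model=model,
            input=prompt,
            previous_response_id=prev_id,  # None on the first call
        )
        outputs.append(resp.output_text)
        prev_id = resp.id
    return outputs

# Hypothetical usage:
# from openai import OpenAI
# summaries = chain_steps(OpenAI(), "gpt-4.1", [
#     "Read this ticket and list the customer's complaints: ...",
#     "Draft a reply resolving each complaint you listed.",
# ])
```

Letting the API hold the conversation state avoids re-sending the full transcript on every step, which matters at million-token scales.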
GPT-4.5 Preview, launched as a research-focused offering, will be sunset on July 14. According to OpenAI, feedback from 4.5's testing phase informed the fine-tuning and deployment configurations now embodied in GPT-4.1. As such, GPT-4.1 is positioned as the replacement path for developers using 4.5 in the API.
ChatGPT users will continue interacting with GPT-4o, where OpenAI is incrementally integrating instruction-following improvements. GPT-4.1 models, however, are API-exclusive.
Technical implications for code-first developers
The decision to scale the context window to 1 million tokens is likely a response to Google's Gemini 2.5 Pro model. It matters most to developers managing large monorepos, documentation-heavy domains, or multi-file dependency chains.
In addition to the larger input context, GPT-4.1's upgraded output limit, now up to 32,768 tokens, enables single-call full-file rewrites, removing the need for post-processing or fragment merging.
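Concretely, a single-call rewrite just means raising the output ceiling and asking for the entire file back. The request builder below is a sketch under that assumption: `max_output_tokens` is the Responses API parameter name, and the prompt wording is my own.

```python
def build_rewrite_request(file_path: str, instructions: str, source: str) -> dict:
    """Ask for a complete rewritten file in one call, using the raised
    32,768-token output ceiling instead of stitching fragments together."""
    return {
        "model": "gpt-4.1",
        "max_output_tokens": 32_768,  # whole file in a single response
        "input": (
            f"{instructions}\n\n"
            f"Return the entire rewritten contents of {file_path}, "
            "with no commentary before or after the code.\n\n"
            f"```\n{source}\n```"
        ),
    }

# Hypothetical usage:
# from openai import OpenAI
# req = build_rewrite_request("app.py", "Replace print calls with logging.", src)
# new_source = OpenAI().responses.create(**req).output_text
```

The "no commentary" constraint is what lets the reply be written straight back to disk without a post-processing pass.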
Adherence to structured output formats lets developers build workflows around minimal-output code diffs, cutting token costs and improving system responsiveness.
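When a model returns diffs rather than full files, it is worth structurally validating the reply before handing it to a patch tool. The helper below is a hypothetical sanity check of my own, not part of any OpenAI tooling.

```python
import re

def looks_like_unified_diff(text: str) -> bool:
    """Cheap structural check that a model reply is a unified diff,
    worth running before passing the output to `git apply`."""
    lines = text.strip().splitlines()
    has_header = any(l.startswith("--- ") for l in lines) and \
                 any(l.startswith("+++ ") for l in lines)
    has_hunk = any(re.match(r"^@@ -\d+(,\d+)? \+\d+(,\d+)? @@", l) for l in lines)
    return has_header and has_hunk
```

A failed check can trigger a cheap retry prompt ("return only a unified diff"), which is usually far cheaper than applying a malformed patch and debugging the result.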
According to OpenAI's internal tests, GPT-4.1 has already demonstrated improved production outcomes across frontend development, legal document parsing, and backend automation.
In comparative evaluations, paid human graders preferred GPT-4.1-generated websites over GPT-4o's results in 80% of test cases, citing superior functionality and readability in the HTML, CSS, and JavaScript output.
The GPT-4.1 mini and nano models extend these benefits to low-resource environments and latency-critical settings. The introduction of nano provides a fast, low-cost LLM capable of replacing larger models in rapid iteration pipelines, chat interfaces, or embedded developer tools.
Developers using GPT-4.5 or GPT-4o mini are advised to evaluate migration paths now, as GPT-4.1's performance and token economics favor its adoption in most deployment configurations. Model access, prompting guides, and updated benchmarks are available through the OpenAI developer platform.
Per OpenAI, GPT-4o and GPT-4o mini will continue to be supported in the API for the foreseeable future, but emphasis is shifting to the GPT-4.1 line as the preferred upgrade path.