The Inference Report

March 24, 2026

Claude Opus 4.6 jumped from fourth to first on SWE-rebench, posting 65.3% versus its previous 53%, while gpt-5.2-2025-12-11-medium climbed to second at 64.4%, up from 51.0%. GLM-5 reached third at 62.8% after sitting at 49.8%, and DeepSeek-V3.2 vaulted from twenty-first at 37.5% to sixth at 60.9%, a 23.4-point gain that is the largest single movement in the cohort.

The top tier has consolidated upward across the board. Gemini 3.1 Pro Preview improved from 57.2% to 62.3% yet slipped to fifth; that kind of uniform upward drift suggests the benchmark itself may have tightened or been recalibrated rather than merely registering model improvement. Claude Code, which led the prior ranking at 52.9%, now sits fourteenth at 58.4%, and Junie fell from second at 52.1% to eleventh at 59.5%; both posted absolute gains that were masked by steeper climbs elsewhere.

The Artificial Analysis benchmark shows less volatility: GPT-5.4 and Gemini 3.1 Pro Preview tie at 57.2, unchanged from the previous period, while Claude Opus 4.6 holds at 53 on that index despite its 12.3-point leap on SWE-rebench. The divergence means SWE-rebench scores are shifting faster than Artificial Analysis scores, which may reflect different evaluation methodologies, different task distributions, or recalibration of SWE-rebench itself. Twelve new entries appeared in the SWE-rebench top twenty-eight, including gpt-5.4-2026-03-05-medium at fourth and Qwen3.5-397B-A17B at ninth, while several prior top performers dropped out entirely, indicating substantial churn in which models are being evaluated. Without access to the specific SWE-rebench problem set or to scoring-methodology changes between periods, the magnitude of these shifts warrants scrutiny: improvements of 10 to 20 points across the board could reflect genuine model advances, or they could signal that the benchmark has been modified, expanded, or re-baselined in ways that affect comparability.

Cole Brennan
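
The movement figures above are straight deltas between the prior and current ranking snapshots. A minimal sketch of that bookkeeping, in Python; the snapshot structure is an assumption, and only the two entries quoted in the report are reproduced:

```python
# Assumed structure: each snapshot maps model name -> (rank, score).
# Values below are the two examples quoted in the report.
previous = {"Claude Opus 4.6": (4, 53.0), "DeepSeek-V3.2": (21, 37.5)}
current = {"Claude Opus 4.6": (1, 65.3), "DeepSeek-V3.2": (6, 60.9)}

for model, (new_rank, new_score) in current.items():
    old_rank, old_score = previous[model]
    print(f"{model}: #{old_rank} -> #{new_rank}, {new_score - old_score:+.1f} pts")
# Claude Opus 4.6: #4 -> #1, +12.3 pts
# DeepSeek-V3.2: #21 -> #6, +23.4 pts
```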

Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities fairly on real-world software engineering tasks. Unlike other evaluations, it uses the same standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

#    Model                        Score
1    Claude Opus 4.6              65.3%
2    gpt-5.2-2025-12-11-medium    64.4%
3    GLM-5                        62.8%
4    gpt-5.4-2026-03-05-medium    62.8%
5    Gemini 3.1 Pro Preview       62.3%
6    DeepSeek-V3.2                60.9%
7    Claude Sonnet 4.6            60.7%
8    Claude Sonnet 4.5            60.0%
9    Qwen3.5-397B-A17B            59.9%
10   Step-3.5-Flash               59.6%
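
The five-run protocol means each published score is an average over repeated attempts rather than a single pass. A minimal sketch of that aggregation, assuming each run reduces to a list of per-task resolved flags (the data layout here is hypothetical, not SWE-rebench's actual format):

```python
# Hypothetical layout: one boolean per task, one list per run; the reported
# score is the mean resolved rate across all runs (five in SWE-rebench).
def resolved_rate(run: list[bool]) -> float:
    return sum(run) / len(run)

def averaged_score(runs: list[list[bool]]) -> float:
    return sum(resolved_rate(r) for r in runs) / len(runs)

# Toy example: 3 tasks, 5 runs of one model.
runs = [
    [True, True, False],
    [True, False, False],
    [True, True, True],
    [True, True, False],
    [True, False, True],
]
print(f"{averaged_score(runs):.1%}")  # 66.7%
```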

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

#    Model                     Score    tok/s    $/1M
1    GPT-5.4                   57.2     82       $5.63
2    Gemini 3.1 Pro Preview    57.2     117      $4.50
3    GPT-5.3 Codex             54       65       $4.81
4    Claude Opus 4.6           53       47       $10.00
5    Claude Sonnet 4.6         51.7     54       $6.00
6    GPT-5.2                   51.3     67       $4.81
7    GLM-5                     49.8     89       $1.55
8    Claude Opus 4.5           49.7     53       $10.00
9    MiniMax-M2.7              49.6     44       $0.525
10   MiMo-V2-Pro               49.2     0        $0.00
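
Artificial Analysis does not spell out its weighting in this digest, so treat the following as an assumption: the composite can be approximated as an equal-weight average over the per-category scores. The category values below are invented for illustration:

```python
from statistics import mean

# Assumption: equal weights over coding, math, and reasoning; the actual
# Artificial Analysis weighting is not specified here.
def composite_index(category_scores: dict[str, float]) -> float:
    return mean(category_scores.values())

# Invented category scores that happen to average to 57.2.
print(round(composite_index({"coding": 55.0, "math": 61.0, "reasoning": 55.6}), 1))
```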

Output tokens per second — higher is faster. Minimum intelligence score of 40.

#    Model                     tok/s
1    GPT-5.4 mini              233
2    GPT-5.4 nano              221
3    Gemini 3 Flash Preview    185
4    GPT-5 Codex               170
5    Qwen3.5 122B A10B         152
6    MiMo-V2-Flash             139
7    GPT-5.1 Codex             127
8    GPT-5.1                   124
9    Gemini 3 Pro Preview      119
10   Gemini 3.1 Pro Preview    117

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

#    Model                  $/1M
1    MiMo-V2-Flash          $0.15
2    DeepSeek V3.2          $0.315
3    GPT-5.4 nano           $0.463
4    MiniMax-M2.7           $0.525
5    MiniMax-M2.5           $0.525
6    GPT-5 mini             $0.688
7    Qwen3.5 27B            $0.825
8    GLM-4.7                $1.00
9    Kimi K2 Thinking       $1.07
10   Qwen3.5 122B A10B      $1.10
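
The blended $/1M figure combines input and output prices at the stated 3:1 ratio, i.e. three input tokens for every output token. A minimal sketch of that calculation; the example prices are placeholders, not any provider's published rates:

```python
def blended_cost(input_per_1m: float, output_per_1m: float, in_out_ratio: float = 3.0) -> float:
    """Blended $/1M tokens for a workload with `in_out_ratio` input tokens per output token."""
    return (in_out_ratio * input_per_1m + output_per_1m) / (in_out_ratio + 1)

# Placeholder prices: $0.25/1M input, $0.51/1M output -> (3*0.25 + 0.51) / 4 = $0.315
print(f"${blended_cost(0.25, 0.51):.3f}")
```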