The Inference Report

March 20, 2026

The SWE-rebench top tier is unchanged from the previous cycle, with Claude Code holding 52.9%, Junie at 52.1%, and Claude Opus 4.6 and gpt-5.2-2025-12-11-xhigh tied at 51.7%, which suggests incremental gains among the highest-performing models have plateaued.

Below that ceiling, the picture is volatile. Gemini 3 Pro Preview dropped from 48.4 to 46.7 on Artificial Analysis while holding rank #8 on SWE-rebench. Kimi K2 Thinking climbed from 40.9 to 43.8 on Artificial Analysis and jumped 20 positions on SWE-rebench, from #33 to #13, which points to either methodological divergence between the two benchmarks or a genuine capability shift on specific coding tasks. GLM-5 fell sharply from 49.8 to 42.1 on SWE-rebench, dropping from #7 to #15, even though it still posts 49.8 on the Artificial Analysis index; that gap warrants scrutiny into whether Artificial Analysis samples a different problem distribution or the SWE-rebench methodology has tightened. Kimi K2.5 shows a similar split, declining from 46.8 to 37.9 on SWE-rebench while holding at 46.8 on Artificial Analysis, which suggests the two benchmarks reward different architectural or prompt-handling strategies.

The broader pattern is that neither benchmark has settled into stable rankings: models in the 35-50% range on SWE-rebench swing 10-20 positions between cycles, and the 5-10 percentage-point divergences between SWE-rebench and Artificial Analysis scores suggest the two are measuring meaningfully different aspects of code-generation capability rather than converging on a unified signal.
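
One way to quantify how far the two leaderboards disagree is a rank correlation over the models they both cover. Below is a minimal sketch computing Spearman's rho with the classic tie-free closed form; the score pairs are illustrative placeholders, not the published leaderboard numbers.

```python
# Sketch: rank agreement between two leaderboards via Spearman's rho.
# All scores below are made-up placeholders for illustration.

def rank_map(scores):
    # Rank models by score, best first (rank 1); no tie handling, for brevity.
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {model: i + 1 for i, model in enumerate(ordered)}

def spearman_rho(a, b):
    # Spearman's rho over models present on both leaderboards:
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), valid when there are no ties.
    common = sorted(set(a) & set(b))
    ra = rank_map({m: a[m] for m in common})
    rb = rank_map({m: b[m] for m in common})
    n = len(common)
    d_sq = sum((ra[m] - rb[m]) ** 2 for m in common)
    return 1 - 6 * d_sq / (n * (n * n - 1))

swe_rebench = {"model_a": 46.7, "model_b": 43.8, "model_c": 42.1, "model_d": 37.9}
artificial_analysis = {"model_a": 46.7, "model_b": 43.8, "model_c": 49.8, "model_d": 46.8}
print(f"Spearman rho: {spearman_rho(swe_rebench, artificial_analysis):.2f}")
```

A rho near 1 would mean the two benchmarks order models almost identically despite different absolute scores; values well below 1, as the cycle-over-cycle rank swings above imply, indicate the leaderboards genuinely disagree about relative capability.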

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance (a sketch of that averaging follows the table).

 #   Model                       Score
 1   Claude Code                 52.9%
 2   Junie                       52.1%
 3   Claude Opus 4.6             51.7%
 4   gpt-5.2-2025-12-11-xhigh    51.7%
 5   gpt-5.2-2025-12-11-medium   51.0%
 6   gpt-5.1-codex-max           48.5%
 7   Claude Sonnet 4.5           47.1%
 8   Gemini 3 Pro Preview        46.7%
 9   Gemini 3 Flash Preview      46.7%
10   gpt-5.2-codex               45.0%
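
Since SWE-rebench averages five independent runs per model, each score carries run-to-run spread. A minimal sketch of that aggregation, assuming you have per-run resolved rates (the numbers below are made up):

```python
import statistics

# Hypothetical resolved rates (%) for one model across five runs.
runs = [52.1, 53.4, 52.8, 52.6, 53.6]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)  # sample standard deviation (n - 1)
# Rough 95% interval on the mean under a normal approximation.
half_width = 1.96 * stdev / len(runs) ** 0.5

print(f"score: {mean:.1f}% +/- {half_width:.1f} (95% CI, n={len(runs)})")
```

With only five runs the interval is coarse, but it is enough to flag when two models' scores (say, the 46.7% tie at ranks #8 and #9) are statistically indistinguishable.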

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

 #   Model                     Score   tok/s   $/1M
 1   GPT-5.4                   57.2    70      $5.63
 2   Gemini 3.1 Pro Preview    57.2    117     $4.50
 3   GPT-5.3 Codex             54      70      $4.81
 4   Claude Opus 4.6           53      56      $10.00
 5   Claude Sonnet 4.6         51.7    68      $6.00
 6   GPT-5.2                   51.3    66      $4.81
 7   GLM-5                     49.8    74      $1.55
 8   Claude Opus 4.5           49.7    60      $10.00
 9   MiniMax-M2.7              49.6    43      $0.525
10   MiMo-V2-Pro               49.2    0       $0.00

Output tokens per second; higher is faster. Only models with an intelligence score of at least 40 are included.

 #   Model                    tok/s
 1   GPT-5.4 mini             254
 2   GPT-5.4 nano             216
 3   Grok 4.20 Beta 0309      200
 4   Gemini 3 Flash Preview   186
 5   GPT-5 Codex              176
 6   MiMo-V2-Flash            134
 7   Qwen3.5 122B A10B        121
 8   Gemini 3.1 Pro Preview   117
 9   Gemini 3 Pro Preview     111
10   GPT-5.1 Codex             98
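
For context on how a tok/s figure like these is produced: it is typically wall-clock throughput over a streamed response. A generic sketch, assuming any iterable that yields output tokens; the fake_stream generator here is a stand-in, not a real API client.

```python
import time

def tokens_per_second(stream):
    # Wall-clock output throughput over any iterable of tokens.
    start = time.perf_counter()
    count = sum(1 for _ in stream)  # consume the stream, counting tokens
    elapsed = time.perf_counter() - start
    return count / elapsed

# Stand-in for a streaming response: 500 tokens arriving ~5 ms apart.
def fake_stream(n=500, delay=0.005):
    for _ in range(n):
        time.sleep(delay)
        yield "tok"

print(f"{tokens_per_second(fake_stream()):.0f} tok/s")  # ~200 on this fake stream
```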

Blended cost per 1M tokens, weighted 3:1 input to output; lower is cheaper. Only models with an intelligence score of at least 40 are included.

 #   Model                $/1M
 1   MiMo-V2-Flash        $0.15
 2   DeepSeek V3.2        $0.315
 3   GPT-5.4 nano         $0.463
 4   MiniMax-M2.7         $0.525
 5   MiniMax-M2.5         $0.525
 6   GPT-5 mini           $0.688
 7   Qwen3.5 27B          $0.825
 8   GLM-4.7              $1.00
 9   Kimi K2 Thinking     $1.07
10   Qwen3.5 122B A10B    $1.10
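
The blended figure is simply a weighted average of per-token prices at the stated 3:1 input-to-output ratio. A one-function sketch; the example prices are placeholders, not any model's actual rates.

```python
def blended_price(input_per_1m: float, output_per_1m: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    # Blended $/1M tokens at a fixed input:output ratio (3:1 by default).
    total = input_weight + output_weight
    return (input_weight * input_per_1m + output_weight * output_per_1m) / total

# Placeholder prices: $0.40/1M input, $1.60/1M output -> $0.70 blended.
print(f"${blended_price(0.40, 1.60):.3f} per 1M tokens")
```

The 3:1 weighting reflects a typical workload where prompts and context dominate, so output-heavy use (long generations, chain-of-thought) will cost more than the blended number suggests for models with expensive output tokens.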