The Inference Report

March 19, 2026

The SWE-rebench and Artificial Analysis benchmarks show minimal movement at the top tier. Claude Code holds 52.9% on SWE-rebench, with the next four models clustered within 1.1 percentage points, a pattern that has held steady. On Artificial Analysis, GPT-5.4 and Gemini 3.1 Pro Preview are tied at 57.2, with third-place GPT-5.3 Codex at 54.0.

The rankings below position 10 are more volatile. On SWE-rebench, Kimi K2 Thinking jumped from position 30 to 13 with a 2.9-point gain to 43.8%, Claude Opus 4.5 dropped from position 8 to 12, losing 5.9 points to 43.8%, and GLM-5 fell from position 7 to 15, declining 7.7 points to 42.1%. On Artificial Analysis, most entries shifted by single positions rather than substantial score changes; new entries include MiniMax-M2.7 (49.6), GPT-5.4 mini (48.1), and GPT-5.4 nano (44.4) in the upper rankings, with Sarvam 105B (18.2) and Sarvam 30B (12.4) entering further down.

The compression of SWE-rebench scores into the 40–52% range suggests either a ceiling effect or a narrowing gap between frontier models, while Artificial Analysis's broader distribution and deeper ranking list indicate different evaluation criteria or model coverage. Without more detail on SWE-rebench's methodology, it is hard to say whether these movements reflect genuine capability differences or testing variance.

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

 #  Model                      Score
 1  Claude Code                52.9%
 2  Junie                      52.1%
 3  Claude Opus 4.6            51.7%
 4  gpt-5.2-2025-12-11-xhigh   51.7%
 5  gpt-5.2-2025-12-11-medium  51.0%
 6  gpt-5.1-codex-max          48.5%
 7  Claude Sonnet 4.5          47.1%
 8  Gemini 3 Pro Preview       46.7%
 9  Gemini 3 Flash Preview     46.7%
10  gpt-5.2-codex              45.0%
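The five-run averaging SWE-rebench describes can be sketched as a mean over per-run resolved rates. The run results below are hypothetical, not actual benchmark data:

```python
# Sketch of multi-run score averaging: each model is run several times and
# the reported score is the mean resolved rate across runs. The per-run
# values here are made up for illustration.
from statistics import mean, stdev

runs = [0.533, 0.521, 0.536, 0.525, 0.530]  # hypothetical resolved rate per run

score = mean(runs)    # the single number that appears on the leaderboard
spread = stdev(runs)  # run-to-run variation the averaging is meant to absorb

print(f"reported score: {score:.1%} (run-to-run stdev {spread:.1%})")
```

Reporting the mean of several runs damps the stochastic variance a single run would carry into the rankings.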

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

 #  Model                   Score  tok/s    $/1M
 1  GPT-5.4                  57.2     71   $5.63
 2  Gemini 3.1 Pro Preview   57.2    112   $4.50
 3  GPT-5.3 Codex            54.0     66   $4.81
 4  Claude Opus 4.6          53.0     51  $10.00
 5  Claude Sonnet 4.6        51.7     56   $6.00
 6  GPT-5.2                  51.3     66   $4.81
 7  GLM-5                    49.8     65   $1.55
 8  Claude Opus 4.5          49.7     56  $10.00
 9  MiniMax-M2.7             49.6     43  $0.525
10  MiMo-V2-Pro              49.2      0   $0.00
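The composite index above blends coding, math, and reasoning results into one score. The actual category weights are not given in this report, so the following is a minimal equal-weight sketch with made-up category numbers:

```python
# Hypothetical composite index: an equal-weight mean over per-category
# scores. Artificial Analysis's real weighting scheme is not published in
# this report; both the weights and the scores below are assumptions.
categories = {"coding": 0.58, "math": 0.61, "reasoning": 0.53}

composite = sum(categories.values()) / len(categories) * 100
print(f"composite index: {composite:.1f}")
```

A weighted variant would replace the plain mean with per-category weights summing to one.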

Output tokens per second — higher is faster. Minimum intelligence score of 40.

 #  Model                   tok/s
 1  GPT-5.4 mini              246
 2  GPT-5.4 nano              231
 3  Grok 4.20 Beta 0309       197
 4  Gemini 3 Flash Preview    179
 5  GPT-5 Codex               166
 6  MiMo-V2-Flash             131
 7  Qwen3.5 122B A10B         121
 8  Gemini 3.1 Pro Preview    112
 9  Gemini 3 Pro Preview      108
10  GPT-5.1 Codex              95

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

 #  Model                $/1M
 1  MiMo-V2-Flash       $0.15
 2  DeepSeek V3.2      $0.315
 3  GPT-5.4 nano       $0.463
 4  MiniMax-M2.7       $0.525
 5  MiniMax-M2.5       $0.525
 6  GPT-5 mini         $0.688
 7  Qwen3.5 27B        $0.825
 8  GLM-4.7             $1.00
 9  Kimi K2 Thinking    $1.07
10  Qwen3.5 122B A10B   $1.10
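The blended figure above weights input and output token prices 3:1, i.e. three input tokens for every output token. The per-token prices in the example are illustrative, not any vendor's actual pricing:

```python
# Blended cost per 1M tokens at a 3:1 input/output mix: a weighted average
# of 75% input price and 25% output price. Prices below are made up.
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Return the 3:1 blended $/1M tokens."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# e.g. a hypothetical $0.10/1M input and $0.30/1M output blend to $0.15/1M
print(f"${blended_price(0.10, 0.30):.2f}")
```

Because most agentic workloads read far more context than they write, the 3:1 blend tracks real spend more closely than either raw price alone.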