The Inference Report

March 21, 2026

The SWE-rebench rankings show minimal movement at the top tier: Claude Code holds 52.9%, and the next three positions all sit within 1.2 points of the lead, suggesting performance has plateaued in a narrow band where incremental gains demand substantial effort.

Below the top tier, volatility emerges. Claude Opus 4.5 dropped from 49.7% on Artificial Analysis to 43.8% on SWE-rebench, a 5.9-point gap that flags a possible methodology divergence between the two benchmarks. Gemini 3 Pro Preview fell from 48.4% to 46.7%, and Kimi K2.5 contracted from 46.8% to 37.9%, suggesting these models may perform differently on SWE-rebench's test distribution or problem types. Conversely, Kimi K2 Thinking climbed 2.9 points from 40.9% to 43.8%, and GLM-4.6 gained 4.6 points from 32.5% to 37.1%, suggesting those architectures are relatively stronger on SWE-rebench's task mix.

The Artificial Analysis leaderboard itself remained largely stable in its top 20. New entries mimo-v2-omni (rank 22) and Nanbeige4.1-3B (rank 169) occupy middle and lower positions where churn is expected.

The key signal is not absolute ranking changes but the widening discrepancy between the two benchmarks: models that rank consistently on both (like the top five) inspire confidence, while models showing spreads of 5 points or more warrant scrutiny into whether SWE-rebench measures a materially different capability or whether evaluation methodology accounts for the delta.
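
The divergence check is simple arithmetic. A minimal sketch in Python, using the score pairs quoted above; the 5-point flagging threshold is this report's own heuristic, not a property of either benchmark:

```python
# Flag models whose scores diverge across the two benchmarks.
# Score pairs are the ones quoted in the analysis above.
PAIRED_SCORES = {
    # model: (artificial_analysis, swe_rebench), in percentage points
    "Claude Opus 4.5":      (49.7, 43.8),
    "Gemini 3 Pro Preview": (48.4, 46.7),
    "Kimi K2.5":            (46.8, 37.9),
    "Kimi K2 Thinking":     (40.9, 43.8),
    "GLM-4.6":              (32.5, 37.1),
}

THRESHOLD = 5.0  # points of divergence that warrant scrutiny

for model, (aa, swe) in PAIRED_SCORES.items():
    delta = swe - aa
    flag = "SCRUTINIZE" if abs(delta) >= THRESHOLD else "ok"
    print(f"{model:<22} AA={aa:5.1f}  SWE-rebench={swe:5.1f}  delta={delta:+5.1f}  {flag}")
```

Run against the quoted pairs, this flags Claude Opus 4.5 (-5.9) and Kimi K2.5 (-8.9) while letting the smaller spreads pass.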

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance (a sketch of that averaging follows the table).

| # | Model | Score |
|---|-------|-------|
| 1 | Claude Code | 52.9% |
| 2 | Junie | 52.1% |
| 3 | Claude Opus 4.6 | 51.7% |
| 4 | gpt-5.2-2025-12-11-xhigh | 51.7% |
| 5 | gpt-5.2-2025-12-11-medium | 51.0% |
| 6 | gpt-5.1-codex-max | 48.5% |
| 7 | Claude Sonnet 4.5 | 47.1% |
| 8 | Gemini 3 Pro Preview | 46.7% |
| 9 | Gemini 3 Flash Preview | 46.7% |
| 10 | gpt-5.2-codex | 45.0% |
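
The five-run protocol amounts to averaging resolve rates across repeated runs of the same task set. A minimal sketch; the run results below are invented for illustration, and SWE-rebench's actual harness and aggregation may differ:

```python
import statistics

def resolve_rate(runs: list[list[bool]]) -> float:
    """Mean fraction of tasks resolved, averaged across repeated runs."""
    per_run = [sum(run) / len(run) for run in runs]
    return statistics.mean(per_run)

# Five runs over the same 4 tasks; True = task resolved.
runs = [
    [True, True, False, True],
    [True, False, False, True],
    [True, True, True, True],
    [True, True, False, False],
    [True, True, False, True],
]
print(f"score = {resolve_rate(runs):.1%}")  # 70.0%; repetition smooths stochastic variance
```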

Artificial Analysis composite index across coding, math, and reasoning benchmarks (an illustrative sketch of such an index follows the table).

| # | Model | Score | tok/s | $/1M |
|---|-------|-------|-------|------|
| 1 | GPT-5.4 | 57.2 | 86 | $5.63 |
| 2 | Gemini 3.1 Pro Preview | 57.2 | 117 | $4.50 |
| 3 | GPT-5.3 Codex | 54 | 74 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 54 | $10.00 |
| 5 | Claude Sonnet 4.6 | 51.7 | 70 | $6.00 |
| 6 | GPT-5.2 | 51.3 | 72 | $4.81 |
| 7 | GLM-5 | 49.8 | 83 | $1.55 |
| 8 | Claude Opus 4.5 | 49.7 | 59 | $10.00 |
| 9 | MiniMax-M2.7 | 49.6 | 43 | $0.525 |
| 10 | MiMo-V2-Pro | 49.2 | 0 | $0.00 |
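
Artificial Analysis does not publish its weighting in this digest, but the idea of a composite index can be sketched as a mean over category scores. The per-category numbers and the equal weighting below are assumptions for illustration only, not the provider's published methodology:

```python
# Hypothetical composite index: unweighted mean of per-category scores.
# Categories match the caption above; the scores are invented.
def composite(scores: dict[str, float]) -> float:
    return sum(scores.values()) / len(scores)

gpt_5_4 = {"coding": 58.0, "math": 59.1, "reasoning": 54.5}  # invented values
print(f"composite = {composite(gpt_5_4):.1f}")  # 57.2, on the same scale as the table
```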

Output tokens per second — higher is faster. Minimum intelligence score of 40.

| # | Model | tok/s |
|---|-------|-------|
| 1 | GPT-5.4 mini | 254 |
| 2 | GPT-5.4 nano | 213 |
| 3 | Gemini 3 Flash Preview | 199 |
| 4 | Grok 4.20 Beta 0309 | 192 |
| 5 | GPT-5 Codex | 184 |
| 6 | Qwen3.5 122B A10B | 153 |
| 7 | MiMo-V2-Flash | 137 |
| 8 | Gemini 3.1 Pro Preview | 117 |
| 9 | Gemini 3 Pro Preview | 115 |
| 10 | GPT-5.1 Codex | 102 |

Blended cost per 1M tokens (3:1 input/output); lower is cheaper. Minimum intelligence score of 40. A worked example of the blend follows the table.

| # | Model | $/1M |
|---|-------|------|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | GPT-5.4 nano | $0.463 |
| 4 | MiniMax-M2.7 | $0.525 |
| 5 | MiniMax-M2.5 | $0.525 |
| 6 | GPT-5 mini | $0.688 |
| 7 | Qwen3.5 27B | $0.825 |
| 8 | GLM-4.7 | $1.00 |
| 9 | Kimi K2 Thinking | $1.07 |
| 10 | Qwen3.5 122B A10B | $1.10 |
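
The 3:1 blend in the caption is a weighted average of input and output prices. A minimal sketch; only the 3:1 weighting comes from the caption, and the per-token prices below are hypothetical:

```python
# Blended $/1M tokens at the 3:1 input:output ratio used above.
def blended_cost(input_per_1m: float, output_per_1m: float) -> float:
    """Weighted average: 3 parts input to 1 part output."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# E.g. a model priced at $0.10/1M input and $0.30/1M output:
print(f"${blended_cost(0.10, 0.30):.3f} per 1M tokens")  # $0.150
```

Because input tokens dominate the 3:1 mix, cheap input pricing pulls the blended figure down far more than cheap output pricing does.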