The Inference Report

March 25, 2026

Claude Opus 4.6 holds the top of the SWE-rebench rankings at 65.3%, unchanged from the previous evaluation, while the tier beneath it shows only marginal movement: gpt-5.2-2025-12-11-medium and GLM-5 hold positions two and three at 64.4% and 62.8% respectively.

The most significant repositioning occurs in the middle ranks. Kimi K2.5 sits thirteenth on SWE-rebench (58.5%) but only sixteenth on Artificial Analysis (46.8), and Kimi K2 Thinking ranks seventeenth on SWE-rebench (57.4%) against thirty-fourth on Artificial Analysis (40.9). The gap suggests these models have received targeted improvements in code-related task handling that the Artificial Analysis benchmark has not yet captured.

The divergence between the two leaderboards is also pronounced at the very top: Claude Opus 4.6 ranks first on SWE-rebench at 65.3% but only fourth on Artificial Analysis at 53. SWE-rebench appears to measure a narrower, more specialized software-engineering capability where Claude's advantage is clearest, while Artificial Analysis distributes scores across a broader capability spectrum. Below the top tier, relative positions remain largely stable, but the absolute gap between SWE-rebench and Artificial Analysis scores widens considerably for lower-ranked models, suggesting that coding benchmarks and general-capability benchmarks diverge more sharply as model sophistication decreases. Methodology explains much of this: SWE-rebench tests repository-level engineering tasks with higher fidelity to real-world scenarios, while Artificial Analysis aggregates coding, math, and reasoning benchmarks into a composite index, so direct score comparison is problematic even where the rankings partially align.
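To make the cross-benchmark comparison concrete, here is a minimal sketch in Python that tabulates the rank deltas discussed above. Only the four models and their rank pairs come from this report; the script itself is illustrative.

```python
# Ranks for models discussed above: SWE-rebench rank first,
# Artificial Analysis rank second. Values are taken from this report.
ranks = {
    "Claude Opus 4.6": (1, 4),
    "GLM-5": (3, 7),
    "Kimi K2.5": (13, 16),
    "Kimi K2 Thinking": (17, 34),
}

# Positive delta: the model places better (lower rank number) on SWE-rebench
# than on Artificial Analysis, i.e. its coding rank outruns its general rank.
for name, (swe, aa) in ranks.items():
    print(f"{name:18s} SWE #{swe:2d}  AA #{aa:2d}  delta {aa - swe:+d}")
```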

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

| # | Model | Score |
|---|-------|-------|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | gpt-5.4-2026-03-05-medium | 62.8% |
| 5 | Gemini 3.1 Pro Preview | 62.3% |
| 6 | DeepSeek-V3.2 | 60.9% |
| 7 | Claude Sonnet 4.6 | 60.7% |
| 8 | Claude Sonnet 4.5 | 60.0% |
| 9 | Qwen3.5-397B-A17B | 59.9% |
| 10 | Step-3.5-Flash | 59.6% |
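The five-runs-per-model protocol implies the reported score is an aggregate across runs, presumably a mean. A minimal sketch of that aggregation, with hypothetical per-run numbers:

```python
import statistics

# Hypothetical per-run resolved rates for a single model; SWE-rebench runs
# each model five times, so a natural report is the mean across runs.
runs = [0.655, 0.649, 0.651, 0.660, 0.650]

mean = statistics.mean(runs)
spread = statistics.stdev(runs)  # sample standard deviation across the runs

print(f"score = {mean:.1%} +/- {spread:.1%}")  # score = 65.3% +/- 0.5%
```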

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

| # | Model | Score | tok/s | $/1M |
|---|-------|-------|-------|------|
| 1 | GPT-5.4 | 57.2 | 79 | $5.63 |
| 2 | Gemini 3.1 Pro Preview | 57.2 | 114 | $4.50 |
| 3 | GPT-5.3 Codex | 54 | 72 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 49 | $10.00 |
| 5 | Claude Sonnet 4.6 | 51.7 | 65 | $6.00 |
| 6 | GPT-5.2 | 51.3 | 72 | $4.81 |
| 7 | GLM-5 | 49.8 | 72 | $1.55 |
| 8 | Claude Opus 4.5 | 49.7 | 57 | $10.00 |
| 9 | MiniMax-M2.7 | 49.6 | 45 | $0.525 |
| 10 | MiMo-V2-Pro | 49.2 | 0 | $0.00 |
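The exact category weighting behind the composite index isn't documented here; a minimal sketch assuming an equal-weight mean over the three categories (all scores hypothetical):

```python
# Hypothetical per-category scores for one model. The real Artificial Analysis
# weighting isn't given here, so this assumes a simple equal-weight mean.
categories = {"coding": 55.0, "math": 60.1, "reasoning": 56.5}

composite = sum(categories.values()) / len(categories)
print(f"composite index = {composite:.1f}")  # 57.2 with these inputs
```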

Output tokens per second — higher is faster. Minimum intelligence score of 40.

| # | Model | tok/s |
|---|-------|-------|
| 1 | GPT-5.4 mini | 222 |
| 2 | GPT-5.4 nano | 222 |
| 3 | Gemini 3 Flash Preview | 196 |
| 4 | GPT-5 Codex | 173 |
| 5 | Qwen3.5 122B A10B | 156 |
| 6 | MiMo-V2-Flash | 129 |
| 7 | GPT-5.1 | 123 |
| 8 | Grok 4.20 Beta 0309 | 120 |
| 9 | GPT-5.1 Codex | 118 |
| 10 | Gemini 3 Pro Preview | 116 |
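The speed table above (and the cost table below) applies an intelligence floor of 40 before ranking by the metric. A minimal sketch of that filter-and-sort step, with made-up rows:

```python
# Hypothetical (model, intelligence score, tok/s) rows; the leaderboard
# filters to intelligence >= 40, then sorts by output speed, descending.
rows = [
    ("model-a", 57.2, 79),
    ("model-b", 38.0, 240),  # fast, but excluded by the intelligence floor
    ("model-c", 49.2, 129),
]

MIN_INTELLIGENCE = 40
ranked = sorted(
    (r for r in rows if r[1] >= MIN_INTELLIGENCE),
    key=lambda r: r[2],
    reverse=True,
)
for pos, (name, _, tps) in enumerate(ranked, start=1):
    print(f"{pos}. {name}: {tps} tok/s")
```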

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

| # | Model | $/1M |
|---|-------|------|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | GPT-5.4 nano | $0.463 |
| 4 | MiniMax-M2.7 | $0.525 |
| 5 | MiniMax-M2.5 | $0.525 |
| 6 | GPT-5 mini | $0.688 |
| 7 | Qwen3.5 27B | $0.825 |
| 8 | GLM-4.7 | $1.00 |
| 9 | Kimi K2 Thinking | $1.07 |
| 10 | Qwen3.5 122B A10B | $1.10 |
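A 3:1 blend is a weighted average of the input and output prices at that ratio; a minimal sketch of the arithmetic (prices hypothetical, not tied to any row above):

```python
# Blended price per 1M tokens at a 3:1 input:output ratio, i.e. a weighted
# mean of the two per-token prices. Example prices are hypothetical.
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# e.g. $0.27 input / $1.10 output blends to about $0.48 per 1M tokens
print(f"${blended_price(0.27, 1.10):.3f}/1M")
```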