The Inference Report

April 2, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, up from fourth place and a score of 53 on the prior Artificial Analysis ranking. The 12.3-point difference spans two differently scaled metrics, so it reflects either improved model capability or a meaningful shift in how the two benchmarks weight code-generation tasks. The top five models cluster tightly between 62.3% and 65.3% on SWE-rebench: gpt-5.2-2025-12-11-medium at 64.4%, GLM-5 and gpt-5.4-2026-03-05-medium both at 62.8%, and Gemini 3.1 Pro Preview at 62.3%. GLM-5 and GPT-5.2 moved up sharply relative to their Artificial Analysis positions (seventh and sixth, respectively), while GPT-5.4 and Gemini 3.1 Pro Preview slipped. Further down, Kimi K2.5 climbed from 16th place (46.8 index points) to 13th (58.5%), and Kimi K2 Thinking advanced from 36th (40.9 index points) to 17th (57.4%), suggesting that reasoning-focused variants are closing the gap with general-purpose models on software engineering tasks.

The two benchmarks diverge materially in their orderings. GPT-5.4 ranks first on Artificial Analysis at 57.2 but fourth on SWE-rebench at 62.8%, while Gemini 3.1 Pro Preview matches that 57.2 on Artificial Analysis yet ranks fifth on SWE-rebench. This implies that SWE-rebench either captures different failure modes in code generation or applies stricter evaluation criteria around execution correctness rather than response quality alone.

The spread between top and middle performers is also narrower on SWE-rebench (Claude Opus 4.6 to Step-3.5-Flash spans 5.7 percentage points) than on Artificial Analysis (GPT-5.4 to MiMo-V2-Pro spans 8 index points). That could indicate tighter clustering from smaller sample sizes or more binary pass/fail scoring, though neither leaderboard publishes enough methodology detail to confirm the source of the divergence.
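The spread and delta figures above come straight from the endpoints of the tables below; a quick sketch in Python, hard-coding only the scores quoted in this issue, reproduces them.

```python
# Recompute the deltas quoted above from the table endpoints.

# SWE-rebench top-10 spread: Claude Opus 4.6 (65.3%) vs Step-3.5-Flash (59.6%).
swe_spread = 65.3 - 59.6    # 5.7 percentage points

# Artificial Analysis top-10 spread: GPT-5.4 (57.2) vs MiMo-V2-Pro (49.2).
aa_spread = 57.2 - 49.2     # 8.0 index points

# Claude Opus 4.6 across the two leaderboards; the scales differ,
# so this is a loose comparison, not a like-for-like gain.
opus_delta = 65.3 - 53.0    # 12.3 points

print(f"SWE-rebench spread: {swe_spread:.1f}")
print(f"Artificial Analysis spread: {aa_spread:.1f}")
print(f"Opus 4.6 delta across leaderboards: {opus_delta:.1f}")
```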

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

| # | Model | Score |
|---|-------|-------|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | gpt-5.4-2026-03-05-medium | 62.8% |
| 5 | Gemini 3.1 Pro Preview | 62.3% |
| 6 | DeepSeek-V3.2 | 60.9% |
| 7 | Claude Sonnet 4.6 | 60.7% |
| 8 | Claude Sonnet 4.5 | 60.0% |
| 9 | Qwen3.5-397B-A17B | 59.9% |
| 10 | Step-3.5-Flash | 59.6% |
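SWE-rebench's published notes don't spell out how the five runs are aggregated, so treat this as a minimal sketch: assuming each run yields binary pass/fail outcomes per task, a model's score would be the mean resolve rate across runs. All numbers in `runs` are made up.

```python
from statistics import mean, stdev

# Hypothetical data: five independent runs over the same task set,
# each entry a per-task pass/fail outcome (1 = task resolved).
runs = [
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 0, 1],
    [1, 0, 1, 1, 1, 0],
    [1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 0],
]

# Per-run resolve rate, then the cross-run mean damps stochastic variance.
per_run = [100 * mean(run) for run in runs]
score = mean(per_run)

print(f"score: {score:.1f}%  (run-to-run stdev: {stdev(per_run):.1f})")
```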

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

| # | Model | Score | tok/s | $/1M |
|---|-------|-------|-------|------|
| 1 | GPT-5.4 | 57.2 | 75 | $5.63 |
| 2 | Gemini 3.1 Pro Preview | 57.2 | 117 | $4.50 |
| 3 | GPT-5.3 Codex | 54 | 67 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 51 | $10.00 |
| 5 | Claude Sonnet 4.6 | 51.7 | 54 | $6.00 |
| 6 | GPT-5.2 | 51.3 | 70 | $4.81 |
| 7 | GLM-5 | 49.8 | 57 | $1.55 |
| 8 | Claude Opus 4.5 | 49.7 | 52 | $10.00 |
| 9 | MiniMax-M2.7 | 49.6 | 40 | $0.525 |
| 10 | MiMo-V2-Pro | 49.2 | 0 | $1.50 |
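Artificial Analysis doesn't publish the exact weighting behind the composite above, so the following is purely illustrative: a weighted mean over per-benchmark scores, with both the sub-scores and the equal weights invented for the example.

```python
# Illustrative only: sub-scores and equal weights are assumptions,
# not Artificial Analysis's actual inputs or weighting scheme.
sub_scores = {"coding": 55.0, "math": 61.0, "reasoning": 55.6}
weights = {name: 1 / len(sub_scores) for name in sub_scores}

composite = sum(weights[name] * s for name, s in sub_scores.items())
print(f"composite index: {composite:.1f}")  # 57.2 with these made-up inputs
```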

Output tokens per second — higher is faster. Minimum intelligence score of 40.

| # | Model | tok/s |
|---|-------|-------|
| 1 | Grok 4.20 Beta 0309 | 248 |
| 2 | GPT-5.4 nano | 194 |
| 3 | Gemini 3 Flash Preview | 191 |
| 4 | GPT-5.4 mini | 184 |
| 5 | GPT-5 Codex | 159 |
| 6 | GPT-5.1 Codex | 138 |
| 7 | Qwen3.5 122B A10B | 134 |
| 8 | MiMo-V2-Flash | 123 |
| 9 | Gemini 3.1 Pro Preview | 117 |
| 10 | Gemini 3 Pro Preview | 115 |
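Measuring this metric is straightforward in principle: stream a completion, count the output tokens, divide by wall-clock time. A rough sketch, where the `generate` callable and its token count are stand-ins rather than any particular provider's API:

```python
import time

def output_tokens_per_second(generate, prompt: str) -> float:
    """Time one generation and return output tokens per wall-clock second.

    `generate` is a hypothetical callable returning the number of output
    tokens it produced; real leaderboard rigs also control prompt length,
    concurrency, and time-to-first-token, which this toy harness ignores.
    """
    start = time.perf_counter()
    n_output_tokens = generate(prompt)
    return n_output_tokens / (time.perf_counter() - start)
```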

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

| # | Model | $/1M |
|---|-------|------|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | GPT-5.4 nano | $0.463 |
| 4 | MiniMax-M2.7 | $0.525 |
| 5 | KAT Coder Pro V2 | $0.525 |
| 6 | MiniMax-M2.5 | $0.525 |
| 7 | GPT-5 mini | $0.688 |
| 8 | Qwen3.5 27B | $0.825 |
| 9 | GLM-4.7 | $1.00 |
| 10 | Kimi K2 Thinking | $1.07 |
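The 3:1 blend above is a weighted average of input and output prices at an assumed 3:1 input-to-output token ratio. A sketch; the split prices in the example are hypothetical, chosen only so the result matches MiMo-V2-Flash's published $0.15 blend:

```python
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Blended $/1M tokens at a 3:1 input:output token ratio."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# Hypothetical split prices (the table only publishes the blend).
print(f"${blended_price(0.10, 0.30):.2f} per 1M tokens")  # $0.15
```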