The Inference Report

April 11, 2026

Claude Opus 4.6 moved from fourth to first on SWE-rebench, improving from 53% to 65.3%, a 12.3-point gain that puts it ahead of gpt-5.2-2025-12-11-medium at 64.4%. GLM-5 climbed from tenth to third, jumping 13 points to 62.8% and matching gpt-5.4-2026-03-05-medium. Gemini 3.1 Pro Preview fell from first to fifth despite scoring 62.3%, suggesting either that the benchmark tightened or that rival models gained from targeted changes rather than the field improving uniformly. Kimi K2.5 advanced from twentieth to thirteenth with an 11.7-point increase to 58.5%, and Kimi K2 Thinking jumped from forty-second to seventeenth with a 16.5-point gain to 57.4%.

The Artificial Analysis rankings, by contrast, show minimal movement in the top tier, with most models holding their positions. Either the two leaderboards measure different problem spaces, or SWE-rebench's methodology is capturing recent architectural improvements that general-purpose benchmarks have not yet registered. The concentration of gains among Claude and Kimi variants, paired with Gemini's relative decline, hints at task-specific optimization rather than across-the-board capability expansion. Without methodological detail on how SWE-rebench was constructed or modified, it is unclear whether these shifts reflect genuine progress on software engineering tasks or a change in the benchmark's composition.
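The movements above are simple deltas between two leaderboard snapshots. A minimal sketch of the comparison, with the prior scores back-computed from the quoted gains rather than taken from an actual SWE-rebench export:

```python
# Hypothetical before/after snapshots: {model: (rank, score)}.
# The "previous" figures are back-computed from the gains quoted above,
# not pulled from a real SWE-rebench export.
previous = {"Claude Opus 4.6": (4, 53.0), "GLM-5": (10, 49.8)}
current = {"Claude Opus 4.6": (1, 65.3), "GLM-5": (3, 62.8)}

for model, (old_rank, old_score) in previous.items():
    new_rank, new_score = current[model]
    gain = new_score - old_score
    print(f"{model}: #{old_rank} -> #{new_rank}, {gain:+.1f} pts to {new_score}%")
```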

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities fairly on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance; a sketch of that five-run averaging follows the table.

| # | Model | Score |
|---|-------|-------|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | gpt-5.4-2026-03-05-medium | 62.8% |
| 5 | Gemini 3.1 Pro Preview | 62.3% |
| 6 | DeepSeek-V3.2 | 60.9% |
| 7 | Claude Sonnet 4.6 | 60.7% |
| 8 | Claude Sonnet 4.5 | 60.0% |
| 9 | Qwen3.5-397B-A17B | 59.9% |
| 10 | Step-3.5-Flash | 59.6% |
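Because each model is run five times, the reported score is an aggregate over those runs rather than a single sample. A minimal sketch of that aggregation, assuming a plain mean over hypothetical per-run resolved rates (the report does not publish per-run numbers or the exact aggregation rule):

```python
import statistics

# Hypothetical resolved-task rates for one model across five runs.
runs = [0.641, 0.658, 0.650, 0.649, 0.655]

mean = statistics.mean(runs)
spread = statistics.stdev(runs)  # sample standard deviation across runs
print(f"score: {mean:.1%} +/- {spread:.1%}")
```

Averaging over several runs keeps one lucky or unlucky sample from reshuffling the leaderboard.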

Artificial Analysis composite index across coding, math, and reasoning benchmarks. A sketch of how such a composite can be computed follows the table.

| # | Model | Score | tok/s | $/1M |
|---|-------|-------|-------|------|
| 1 | Gemini 3.1 Pro Preview | 57.2 | 124 | $4.50 |
| 2 | GPT-5.4 | 56.8 | 80 | $5.63 |
| 3 | GPT-5.3 Codex | 53.6 | 75 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 49 | $10.00 |
| 5 | Muse Spark | 52.1 | 0 | $0.00 |
| 6 | Claude Sonnet 4.6 | 51.7 | 50 | $6.00 |
| 7 | GLM-5.1 | 51.4 | 57 | $2.15 |
| 8 | GPT-5.2 | 51.3 | 65 | $4.81 |
| 9 | Qwen3.6 Plus | 50 | 49 | $1.13 |
| 10 | GLM-5 | 49.8 | 70 | $1.55 |
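The composite collapses per-category results into one number. The exact weighting Artificial Analysis uses is not given here, so the sketch below assumes an equal-weight mean over hypothetical category scores:

```python
# Hypothetical per-category scores (0-100) for one model. The equal-weight
# mean is an assumption; the real index may weight categories differently.
categories = {"coding": 60.0, "math": 54.0, "reasoning": 57.0}

composite = sum(categories.values()) / len(categories)
print(f"composite index: {composite:.1f}")  # -> 57.0
```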

Output tokens per second — higher is faster. Minimum intelligence score of 40.

| # | Model | tok/s |
|---|-------|-------|
| 1 | Grok 4.20 0309 v2 | 191 |
| 2 | Grok 4.20 0309 | 185 |
| 3 | GPT-5.4 nano | 178 |
| 4 | Gemini 3 Flash Preview | 176 |
| 5 | GPT-5.1 Codex | 175 |
| 6 | GPT-5 Codex | 171 |
| 7 | GPT-5.4 mini | 160 |
| 8 | Qwen3.5 122B A10B | 136 |
| 9 | Gemini 3 Pro Preview | 134 |
| 10 | Gemini 3.1 Pro Preview | 124 |

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40. The blend formula is sketched after the table.

| # | Model | $/1M |
|---|-------|------|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | GPT-5.4 nano | $0.463 |
| 4 | MiniMax-M2.7 | $0.525 |
| 5 | KAT Coder Pro V2 | $0.525 |
| 6 | MiniMax-M2.5 | $0.525 |
| 7 | GPT-5 mini | $0.688 |
| 8 | Qwen3.5 27B | $0.825 |
| 9 | GLM-4.7 | $1.00 |
| 10 | Kimi K2 Thinking | $1.07 |
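The 3:1 blend weights input tokens three times as heavily as output tokens, matching input-heavy usage patterns. A minimal sketch of the formula, using illustrative prices rather than any listed model's actual rates:

```python
# Blended $/1M tokens at a 3:1 input:output ratio, as in the table above.
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    # Weighted average: 3 parts input, 1 part output, over 4 parts total.
    return (3 * input_per_1m + output_per_1m) / 4

# Illustrative prices, not any listed model's actual rates.
print(blended_price(1.25, 5.00))  # -> 2.1875, i.e. about $2.19 per 1M blended
```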