The Inference Report

April 10, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, 12.3 points above its 53.0 score on the Artificial Analysis index, while gpt-5.2-2025-12-11-medium sits at 64.4% and GLM-5 climbed to 62.8% from 49.8 on the older benchmark. The movement reflects a fundamental divergence between the two evaluation frameworks: SWE-rebench appears to measure software engineering capability through a different task distribution or methodology than Artificial Analysis, producing a reshuffling that extends beyond the top tier.

Gemini 3.1 Pro Preview dropped from #1 on Artificial Analysis (57.2) to #5 on SWE-rebench (62.3), while Kimi K2.5 jumped from #20 (46.8) to #13 (58.5) and Kimi K2 Thinking advanced from #42 (40.9) to #17 (57.4), suggesting these models perform meaningfully better on the specific coding problems SWE-rebench isolates.

Artificial Analysis may weight different problem categories or evaluation criteria, or SWE-rebench may apply stricter pass conditions. Without documentation of the methodological differences, though, the across-the-board score inflation (top models gaining 8-12 points) raises the question of whether the two benchmarks measure comparable constructs at all, or whether one simply grades more leniently. Claude's consistent lead across both rankings, and the clustering of several models in the 58-62% band on SWE-rebench, suggest the benchmark has enough resolution to differentiate models; direct score comparisons between the two systems, however, are unreliable.

Cole Brennan
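
The cross-benchmark movement is easy to tabulate. Below is a minimal sketch in Python, using only the ranks and scores quoted above; pairing a SWE-rebench pass rate with an Artificial Analysis index value is illustrative only, since the two scales are not directly comparable.

```python
# Rank and score movement between the two leaderboards, using the figures
# quoted in the lede. Caveat: a SWE-rebench pass rate (%) and an Artificial
# Analysis index value are not on the same scale, so the point deltas are
# suggestive, not rigorous.
swe_rebench = {              # model: (rank, score)
    "Claude Opus 4.6":        (1, 65.3),
    "GLM-5":                  (3, 62.8),
    "Gemini 3.1 Pro Preview": (5, 62.3),
    "Kimi K2.5":              (13, 58.5),
    "Kimi K2 Thinking":       (17, 57.4),
}
artificial_analysis = {      # model: (rank, score)
    "Claude Opus 4.6":        (4, 53.0),
    "GLM-5":                  (10, 49.8),
    "Gemini 3.1 Pro Preview": (1, 57.2),
    "Kimi K2.5":              (20, 46.8),
    "Kimi K2 Thinking":       (42, 40.9),
}

for model, (sr_rank, sr_score) in swe_rebench.items():
    aa_rank, aa_score = artificial_analysis[model]
    print(f"{model}: #{aa_rank} -> #{sr_rank}, {sr_score - aa_score:+.1f} pts")
```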

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
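
The five-run protocol matters for reading these numbers: agentic coding evaluations are noisy run to run. How SWE-rebench aggregates the runs isn't spelled out here, so the sketch below assumes a simple mean of per-run resolved rates, with the run-to-run spread as context.

```python
# Aggregating repeated evaluation runs. SWE-rebench's exact aggregation is
# not documented in this report; a plain mean is assumed. The per-run
# resolved rates below are hypothetical.
from statistics import mean, stdev

runs = [64.8, 65.9, 65.1, 65.5, 65.2]  # resolved rate (%) per run

print(f"reported score: {mean(runs):.1f}%")           # 65.3%
print(f"run-to-run stdev: {stdev(runs):.2f} points")  # 0.42
```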

#   Model                      Score
1   Claude Opus 4.6            65.3%
2   gpt-5.2-2025-12-11-medium  64.4%
3   GLM-5                      62.8%
4   gpt-5.4-2026-03-05-medium  62.8%
5   Gemini 3.1 Pro Preview     62.3%
6   DeepSeek-V3.2              60.9%
7   Claude Sonnet 4.6          60.7%
8   Claude Sonnet 4.5          60.0%
9   Qwen3.5-397B-A17B          59.9%
10  Step-3.5-Flash             59.6%

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

#   Model                   Score  tok/s  $/1M
1   Gemini 3.1 Pro Preview  57.2   125    $4.50
2   GPT-5.4                 56.8   79     $5.63
3   GPT-5.3 Codex           53.6   74     $4.81
4   Claude Opus 4.6         53.0   50     $10.00
5   Muse Spark              52.1   0      $0.00
6   Claude Sonnet 4.6       51.7   62     $6.00
7   GLM-5.1                 51.4   65     $2.15
8   GPT-5.2                 51.3   65     $4.81
9   Qwen3.6 Plus            50.0   52     $1.13
10  GLM-5                   49.8   70     $1.55
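
A composite index is just a weighted mean over category scores, which is one reason these values can't be read against SWE-rebench pass rates. A minimal sketch, assuming equal weights over the three categories named above; the actual Artificial Analysis weighting is not documented in this report, and the scores are hypothetical.

```python
# Composite index as a weighted mean of category scores. Equal weights and
# the category scores themselves are assumptions, not Artificial Analysis's
# published formula.
def composite_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    return sum(scores[k] * w for k, w in weights.items()) / sum(weights.values())

scores = {"coding": 55.0, "math": 60.0, "reasoning": 57.0}  # hypothetical
weights = {"coding": 1.0, "math": 1.0, "reasoning": 1.0}    # equal weighting
print(f"{composite_index(scores, weights):.1f}")  # 57.3
```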

Output tokens per second — higher is faster. Minimum intelligence score of 40.

#   Model                   tok/s
1   Grok 4.20 0309 v2       192
2   GPT-5.4 nano            190
3   Grok 4.20 0309          186
4   Gemini 3 Flash Preview  176
5   GPT-5 Codex             168
6   GPT-5.1 Codex           166
7   GPT-5.4 mini            160
8   Gemini 3 Pro Preview    139
9   MiMo-V2-Flash           129
10  Qwen3.5 122B A10B       128
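
Throughput translates directly into wall-clock latency for a streamed response. A quick back-of-the-envelope with a hypothetical 1,000-token response; this ignores time-to-first-token, which the table does not report.

```python
# Seconds to stream a response of a given length at a given throughput.
def generation_seconds(tokens: int, tok_per_s: float) -> float:
    return tokens / tok_per_s

# Hypothetical 1,000-token response on the fastest and slowest entries above.
print(f"{generation_seconds(1000, 192):.1f}s")  # 5.2s  (Grok 4.20 0309 v2)
print(f"{generation_seconds(1000, 128):.1f}s")  # 7.8s  (Qwen3.5 122B A10B)
```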

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

#   Model             $/1M
1   MiMo-V2-Flash     $0.15
2   DeepSeek V3.2     $0.315
3   GPT-5.4 nano      $0.463
4   MiniMax-M2.7      $0.525
5   KAT Coder Pro V2  $0.525
6   MiniMax-M2.5      $0.525
7   GPT-5 mini        $0.688
8   Qwen3.5 27B       $0.825
9   GLM-4.7           $1.00
10  Kimi K2 Thinking  $1.07
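
The 3:1 blend weights input tokens three times as heavily as output tokens, reflecting workloads where prompts dominate completions. A minimal sketch of the arithmetic; the per-direction prices below are hypothetical and not taken from any listed model.

```python
# Blended $/1M tokens at a 3:1 input:output ratio, as used in the table above.
def blended_price(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

# Hypothetical prices: $0.20/1M input, $0.60/1M output.
print(f"${blended_price(0.20, 0.60):.3f}/1M")  # $0.300/1M
```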