The Inference Report

April 4, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, a full 12.3 points above the 53 it posts on Artificial Analysis, where the top tier shows minimal movement and GPT-5.4 and Gemini 3.1 Pro Preview sit tied at 57.2. The divergence between the two benchmarks reflects a known challenge in LLM evaluation: SWE-rebench and Artificial Analysis measure different problem distributions and solution strategies, so direct comparisons across methodologies are unreliable.

On SWE-rebench, the clustering at the top is tight, with only 5.7 percentage points separating first from tenth. That suggests the benchmark may be approaching saturation for frontier models, or that the test set lacks discriminative power at the high end. GLM-5 and Kimi K2.5 show substantial gains on SWE-rebench (13 and 9.7 points respectively), yet their Artificial Analysis positions remain largely stable, indicating the improvements may be specialized to code-related tasks rather than across-the-board capability increases.

Artificial Analysis, which covers a broader evaluation surface, shows Gemma 4 31B and Gemma 4 E4B entering the top 150, suggesting incremental progress in the open-source tier, though most entries below rank 40 shuffle positions without meaningful score changes. Neither benchmark provides enough methodological transparency in the published data to tell whether score improvements reflect genuine capability gains or dataset-specific optimization, and the absence of error bars or confidence intervals makes it impossible to judge whether most movements are statistically significant.
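To put the significance caveat in concrete terms, here is a minimal sketch of the kind of check the leaderboards omit: a normal-approximation confidence interval for a pass rate. The 200-task test-set size is an assumption made purely for illustration; SWE-rebench does not publish per-run task counts or error bars.

```python
import math

def pass_rate_ci(score: float, n_tasks: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation confidence interval for a benchmark pass rate.

    score: observed pass rate as a fraction (e.g. 0.653)
    n_tasks: number of tasks in the test set (assumed here, not published)
    """
    se = math.sqrt(score * (1 - score) / n_tasks)  # binomial standard error
    return (score - z * se, score + z * se)

# Hypothetical test-set size of 200 tasks -- an assumption, not a published figure.
for model, score in [("Claude Opus 4.6", 0.653), ("Step-3.5-Flash", 0.596)]:
    lo, hi = pass_rate_ci(score, n_tasks=200)
    print(f"{model}: {score:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

Under that assumed test-set size, the intervals span roughly plus or minus 6 to 7 points, wider than the 5.7-point spread between first and tenth, which is the substance of the saturation caveat.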

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities fairly on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance (a sketch of that aggregation follows the table).

| # | Model | Score |
|---|-------|-------|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | gpt-5.4-2026-03-05-medium | 62.8% |
| 5 | Gemini 3.1 Pro Preview | 62.3% |
| 6 | DeepSeek-V3.2 | 60.9% |
| 7 | Claude Sonnet 4.6 | 60.7% |
| 8 | Claude Sonnet 4.5 | 60.0% |
| 9 | Qwen3.5-397B-A17B | 59.9% |
| 10 | Step-3.5-Flash | 59.6% |
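A minimal sketch of how a five-run protocol like the one described above turns per-run results into a single score. The per-run resolved counts and the aggregation (mean resolve rate across runs) are assumptions for illustration, not SWE-rebench's published pipeline.

```python
# Hypothetical per-run resolved counts for one model across five runs on an
# assumed 200-task set -- illustrative numbers, not real SWE-rebench data.
runs_resolved = [128, 133, 130, 131, 129]
n_tasks = 200

rates = [r / n_tasks for r in runs_resolved]
mean_rate = sum(rates) / len(rates)

# Sample standard deviation across runs captures the stochastic variance
# that motivates running each model more than once.
variance = sum((r - mean_rate) ** 2 for r in rates) / (len(rates) - 1)
std_dev = variance ** 0.5

print(f"mean resolve rate: {mean_rate:.1%} (run-to-run std dev {std_dev:.1%})")
```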

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

| # | Model | Score | tok/s | $/1M |
|---|-------|-------|-------|------|
| 1 | GPT-5.4 | 57.2 | 76 | $5.63 |
| 2 | Gemini 3.1 Pro Preview | 57.2 | 118 | $4.50 |
| 3 | GPT-5.3 Codex | 54 | 72 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 46 | $10.00 |
| 5 | Claude Sonnet 4.6 | 51.7 | 52 | $6.00 |
| 6 | GPT-5.2 | 51.3 | 70 | $4.81 |
| 7 | GLM-5 | 49.8 | 61 | $1.55 |
| 8 | Claude Opus 4.5 | 49.7 | 51 | $10.00 |
| 9 | MiniMax-M2.7 | 49.6 | 40 | $0.525 |
| 10 | MiMo-V2-Pro | 49.2 | 0 | $1.50 |

Output tokens per second — higher is faster. Minimum intelligence score of 40.

| # | Model | tok/s |
|---|-------|-------|
| 1 | Grok 4.20 Beta 0309 | 245 |
| 2 | GPT-5.4 nano | 206 |
| 3 | Gemini 3 Flash Preview | 189 |
| 4 | GPT-5.4 mini | 185 |
| 5 | GPT-5 Codex | 172 |
| 6 | GPT-5.1 Codex | 168 |
| 7 | Qwen3.5 122B A10B | 137 |
| 8 | Gemini 3 Pro Preview | 128 |
| 9 | MiMo-V2-Flash | 125 |
| 10 | Gemini 3.1 Pro Preview | 118 |
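To make the throughput figures concrete, a quick back-of-the-envelope conversion from tokens per second to wall-clock generation time. The 1,000-token response length is an arbitrary assumption, and the calculation ignores time to first token, so real latencies run higher.

```python
# Wall-clock time to generate a fixed-length response at each measured
# throughput; the 1,000-token response length is an arbitrary assumption.
throughputs = {"Grok 4.20 Beta 0309": 245, "Gemini 3.1 Pro Preview": 118}
response_tokens = 1_000

for model, toks_per_s in throughputs.items():
    print(f"{model}: {response_tokens / toks_per_s:.1f}s for {response_tokens} tokens")
```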

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

| # | Model | $/1M |
|---|-------|------|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | GPT-5.4 nano | $0.463 |
| 4 | MiniMax-M2.7 | $0.525 |
| 5 | KAT Coder Pro V2 | $0.525 |
| 6 | MiniMax-M2.5 | $0.525 |
| 7 | GPT-5 mini | $0.688 |
| 8 | Qwen3.5 27B | $0.825 |
| 9 | GLM-4.7 | $1.00 |
| 10 | Kimi K2 Thinking | $1.07 |
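The blended figures above weight input and output token prices 3:1. A minimal sketch of that blend; the per-token prices in the example are hypothetical, since the table publishes only the blended result.

```python
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Blended $/1M tokens assuming a 3:1 input/output token mix."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# Hypothetical per-token prices -- the table reports only the blended number.
print(f"${blended_price(0.10, 0.30):.3f} per 1M tokens")  # -> $0.150
```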