The Inference Report

April 3, 2026

Claude Opus 4.6 holds the SWE-rebench top position at 65.3%, unchanged from the prior measurement, while the Artificial Analysis composite index shows material reshuffling across the field without shifts at the apex. On SWE-rebench, the top tier remains densely clustered between 60 and 65 percent, with gpt-5.2-2025-12-11-medium at 64.4% and GLM-5 and gpt-5.4-2026-03-05-medium both at 62.8%, suggesting diminishing returns in the high-performance region.

Gemini 3.1 Pro Preview ranks fifth on SWE-rebench at 62.3% but second on Artificial Analysis (57.2), and the discrepancy itself warrants scrutiny: the two benchmarks measure different problem domains under different evaluation conditions, so direct ranking comparisons across them carry limited meaning. Notable climbers on Artificial Analysis include Kimi K2 Thinking, which jumped from position 37 (40.9) to position 17 (57.4), and Kimi K2.5, which moved from 16 (46.8) to 13 (58.5); both moves suggest Kimi's reasoning variants now handle the Artificial Analysis task distribution more effectively.

The SWE-rebench methodology itself remains opaque in the provided data: without details on how tasks are selected, whether they stress particular failure modes, or how the evaluation handles partial credit, the stability of the top rankings could reflect either genuine performance plateaus or ceiling effects in the benchmark design. The Artificial Analysis list's expansion to 340 entries and reordering throughout suggests either new model submissions or recalibration, but the data does not clarify which. Meaningful movement exists in the middle ranks of both benchmarks, yet the absence of methodological documentation limits interpretation of whether these shifts reflect true capability divergence or measurement artifacts.

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities fairly on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

 #  Model                       Score
 1  Claude Opus 4.6             65.3%
 2  gpt-5.2-2025-12-11-medium   64.4%
 3  GLM-5                       62.8%
 4  gpt-5.4-2026-03-05-medium   62.8%
 5  Gemini 3.1 Pro Preview      62.3%
 6  DeepSeek-V3.2               60.9%
 7  Claude Sonnet 4.6           60.7%
 8  Claude Sonnet 4.5           60.0%
 9  Qwen3.5-397B-A17B           59.9%
10  Step-3.5-Flash              59.6%
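
The five-run protocol described above reduces to a simple aggregation: average the resolved-task rate across independent runs. The sketch below is a minimal illustration with a hypothetical task count and per-run results; it is not SWE-rebench's actual pipeline or schema.

from statistics import mean, stdev

def aggregate_runs(resolved_per_run: list[int], total_tasks: int) -> dict:
    """Collapse several independent runs into one leaderboard score."""
    rates = [resolved / total_tasks for resolved in resolved_per_run]
    return {
        "score_pct": round(100 * mean(rates), 1),        # the figure shown in the table
        "run_spread_pct": round(100 * stdev(rates), 1),  # spread from stochastic variance
    }

# Hypothetical inputs: five runs over 300 tasks, each resolving 194-199 tasks.
print(aggregate_runs([196, 194, 199, 195, 196], total_tasks=300))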

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

 #  Model                    Score  tok/s  $/1M
 1  GPT-5.4                   57.2     74  $5.63
 2  Gemini 3.1 Pro Preview    57.2    117  $4.50
 3  GPT-5.3 Codex             54       65  $4.81
 4  Claude Opus 4.6           53       48  $10.00
 5  Claude Sonnet 4.6         51.7     55  $6.00
 6  GPT-5.2                   51.3     70  $4.81
 7  GLM-5                     49.8     57  $1.55
 8  Claude Opus 4.5           49.7     51  $10.00
 9  MiniMax-M2.7              49.6     40  $0.525
10  MiMo-V2-Pro               49.2      0  $1.50
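
A composite index like the one above is, at its core, a weighted mean of per-domain scores. The sketch below assumes equal weights and made-up domain scores; Artificial Analysis's actual weighting and benchmark mix are not given in the data here.

def composite_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of normalized (0-100) per-domain benchmark scores."""
    total_weight = sum(weights.values())
    return sum(scores[domain] * weights[domain] for domain in scores) / total_weight

# Hypothetical per-domain scores for a single model, equally weighted.
scores = {"coding": 55.0, "math": 61.0, "reasoning": 56.0}
weights = {"coding": 1.0, "math": 1.0, "reasoning": 1.0}
print(round(composite_index(scores, weights), 1))  # 57.3 on these made-up inputs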

Output tokens per second — higher is faster. Minimum intelligence score of 40.

 #  Model                     tok/s
 1  Grok 4.20 Beta 0309         248
 2  GPT-5.4 nano                195
 3  Gemini 3 Flash Preview      190
 4  GPT-5.4 mini                182
 5  GPT-5 Codex                 162
 6  GPT-5.1 Codex               144
 7  Qwen3.5 122B A10B           131
 8  MiMo-V2-Flash               123
 9  Gemini 3.1 Pro Preview      117
10  Gemini 3 Pro Preview        115
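
The speed table is presumably assembled by filtering on the 40-point intelligence floor and then sorting by output throughput. The entries and intelligence figures below are placeholders, not Artificial Analysis data.

entries = [
    {"model": "model-a", "intelligence": 45.0, "tok_s": 248},
    {"model": "model-b", "intelligence": 41.0, "tok_s": 195},
    {"model": "model-c", "intelligence": 33.0, "tok_s": 300},  # below the floor, dropped
]

ranked = sorted(
    (e for e in entries if e["intelligence"] >= 40),  # minimum intelligence score of 40
    key=lambda e: e["tok_s"],
    reverse=True,                                     # higher is faster
)
for rank, e in enumerate(ranked, start=1):
    print(rank, e["model"], e["tok_s"])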

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

 #  Model                $/1M
 1  MiMo-V2-Flash        $0.15
 2  DeepSeek V3.2        $0.315
 3  GPT-5.4 nano         $0.463
 4  MiniMax-M2.7         $0.525
 5  KAT Coder Pro V2     $0.525
 6  MiniMax-M2.5         $0.525
 7  GPT-5 mini           $0.688
 8  Qwen3.5 27B          $0.825
 9  GLM-4.7              $1.00
10  Kimi K2 Thinking     $1.07
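
The blended figure weights input and output prices 3:1 and reports the result per million tokens. Below is a minimal sketch of that arithmetic, assuming per-million-token prices as inputs; the example prices are illustrative, not any listed model's actual rates.

def blended_price_per_1m(input_per_1m: float, output_per_1m: float) -> float:
    """Blend input and output prices at a 3:1 ratio, per 1M tokens."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# Illustrative prices: $0.10/1M input and $0.30/1M output blend to $0.15/1M.
print(blended_price_per_1m(0.10, 0.30))  # 0.15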