The Inference Report

April 16, 2026

Claude Opus 4.6 tops SWE-rebench at 65.3%, despite placing only fourth on the Artificial Analysis index at 53, while Gemini 3.1 Pro Preview, first on Artificial Analysis at 57.2, sits sixth on SWE-rebench at 62.3% (the two benchmarks score on different scales, so ranks rather than raw scores are the meaningful comparison). The top tier shows compression rather than separation: positions two through five cluster between 62.7% and 64.4%, with gpt-5.2-2025-12-11-medium at 64.4%, GLM-5 and gpt-5.4-2026-03-05-medium both at 62.8%, and GLM-5.1 at 62.7%.

Mid-tier divergence is more pronounced: Kimi K2.5 ranks 20th on Artificial Analysis (46.8) but 16th on SWE-rebench (58.5%), Kimi K2 Thinking 42nd (40.9) against 21st (57.4%), and GLM-4.7 34th (42.1) against 14th (58.7%), suggesting these models either hold task-specific strengths in real-world software engineering or benefit from SWE-rebench's particular methodology. Across the rest of the list the two leaderboards agree more closely, with most models holding positions within a few slots of each other, which raises the question of whether SWE-rebench and Artificial Analysis measure overlapping but distinct problem spaces, or whether one benchmark simply has higher variance than the other. Without clarity on how the two evaluations differ in task mix and scoring, the magnitude of these rank shifts makes it difficult to separate genuine capability differences from benchmark-specific artifacts.

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

 #   Model                       Score
 1   Claude Opus 4.6             65.3%
 2   gpt-5.2-2025-12-11-medium   64.4%
 3   GLM-5                       62.8%
 4   gpt-5.4-2026-03-05-medium   62.8%
 5   GLM-5.1                     62.7%
 6   Gemini 3.1 Pro Preview      62.3%
 7   DeepSeek-V3.2               60.9%
 8   Claude Sonnet 4.6           60.7%
 9   Claude Sonnet 4.5           60.0%
 10  Qwen3.5-397B-A17B           59.9%
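SWE-rebench's five-runs-per-model protocol amounts to reporting an average resolved rate rather than a single roll of the dice. A minimal sketch of that aggregation, with illustrative run scores (real run-level data is not published in this report):

```python
from statistics import mean, stdev

# Hypothetical per-run resolved rates (%) for one model across five
# independent runs, as in SWE-rebench's protocol. These numbers are
# made up for illustration.
runs = [64.1, 65.9, 65.0, 66.2, 65.3]

score = mean(runs)    # the kind of number the leaderboard would report
spread = stdev(runs)  # run-to-run stochastic variance

print(f"score: {score:.1f}% (±{spread:.1f} over {len(runs)} runs)")
```

The spread matters for reading the table above: with positions two through five separated by less than two points, run-to-run variance of a similar magnitude could reorder them.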

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

 #   Model                    Score   tok/s   $/1M
 1   Gemini 3.1 Pro Preview   57.2    122     $4.50
 2   GPT-5.4                  56.8    69      $5.63
 3   GPT-5.3 Codex            53.6    65      $4.81
 4   Claude Opus 4.6          53      43      $10.00
 5   Muse Spark               52.1    0       $0.00
 6   Claude Sonnet 4.6        51.7    48      $6.00
 7   GLM-5.1                  51.4    43      $2.15
 8   GPT-5.2                  51.3    61      $4.81
 9   Qwen3.6 Plus             50      53      $1.13
 10  GLM-5                    49.8    61      $1.55
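The rank divergence discussed in the lead can be read directly off these two tables. A small sketch comparing positions for the models that appear in both top-10 lists:

```python
# Rank positions copied from the two tables above, restricted to
# models present in both top-10 lists.
swe_rebench = {"Claude Opus 4.6": 1, "GLM-5": 3, "GLM-5.1": 5,
               "Gemini 3.1 Pro Preview": 6, "Claude Sonnet 4.6": 8}
artificial_analysis = {"Gemini 3.1 Pro Preview": 1, "Claude Opus 4.6": 4,
                       "Claude Sonnet 4.6": 6, "GLM-5.1": 7, "GLM-5": 10}

for model, swe_rank in swe_rebench.items():
    aa_rank = artificial_analysis[model]
    delta = aa_rank - swe_rank  # positive: ranked better on SWE-rebench
    print(f"{model}: SWE-rebench #{swe_rank}, AA #{aa_rank} (delta {delta:+d})")
```

Positive deltas (Claude Opus 4.6, both GLM models) mark models that look stronger on the software-engineering benchmark than on the composite index; Gemini 3.1 Pro Preview is the clearest case of the reverse.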

Output tokens per second — higher is faster. Minimum intelligence score of 40.

 #   Model                    tok/s
 1   Gemini 3 Flash Preview   173
 2   GPT-5 Codex              164
 3   GPT-5.1 Codex            160
 4   GPT-5.4 nano             156
 5   GPT-5.4 mini             153
 6   Grok 4.20 0309           141
 7   Grok 4.20 0309 v2        139
 8   Gemini 3 Pro Preview     126
 9   Gemini 3.1 Pro Preview   122
 10  Qwen3.5 122B A10B        119

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

 #   Model              $/1M
 1   MiMo-V2-Flash      $0.15
 2   DeepSeek V3.2      $0.315
 3   GPT-5.4 nano       $0.463
 4   MiniMax-M2.7       $0.525
 5   KAT Coder Pro V2   $0.525
 6   MiniMax-M2.5       $0.525
 7   GPT-5 mini         $0.688
 8   Qwen3.5 27B        $0.825
 9   GLM-4.7            $1.00
 10  Kimi K2 Thinking   $1.07
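A 3:1 input/output blend weights the input price three times as heavily: blended = (3 × input + output) / 4. A sketch of that arithmetic, with hypothetical per-token prices (the underlying input/output rates behind the table are not listed here):

```python
def blended_cost(input_price: float, output_price: float) -> float:
    """Blended $/1M tokens at a 3:1 input:output token ratio."""
    return (3 * input_price + output_price) / 4

# Hypothetical split: $0.10/1M input, $0.30/1M output.
print(round(blended_cost(0.10, 0.30), 3))  # 0.15
```

The 3:1 weighting reflects typical workloads, where prompts (input) dominate completions (output), so a model with cheap input tokens can rank well here even with pricier output.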