The Inference Report

April 23, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, well clear of the 53 it posts on the Artificial Analysis index, though the two benchmarks use different scales and measure different aspects of model capability, so the 12.3-point gap is not a like-for-like gain. On SWE-rebench specifically, the top tier has compressed into a narrow band: gpt-5.2-2025-12-11-medium at 64.4%, GLM-5 and gpt-5.4-2026-03-05-medium tied at 62.8%, and GLM-5.1 at 62.7% occupy positions two through five, and no model scores below 60% until rank ten.

The more significant movement appears in the middle ranks. GLM-4.7 jumped from position 38 (42.1 on Artificial Analysis) to position 14 (58.7% on SWE-rebench), Kimi K2.5 advanced from position 23 (46.8) to position 16 (58.5%), and Kimi K2 Thinking climbed from position 46 (40.9) to position 21 (57.4%), which suggests these models either improved substantially or benefit from SWE-rebench's evaluation methodology relative to Artificial Analysis scoring. Gemini 3.1 Pro Preview moves the other way, slipping from position 2 on Artificial Analysis to position 6 on SWE-rebench (a 57.2 index score versus 62.3%), a counterintuitive result for a model of its standing that may reflect differences in how the benchmarks weight problem-solving approaches or test coverage.

The SWE-rebench results show less volatility at the extremes than Artificial Analysis, with the bottom ranks similarly stable. But the compression at the top and the selective jumps in the middle suggest SWE-rebench either captures a narrower slice of coding ability or rewards specific architectural choices that certain model families exploit more effectively.
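To make those cross-leaderboard movements concrete, here is a minimal sketch that recomputes rank deltas for the models that appear in both top tens below. The ranks and scores are copied from this issue's tables; the point is that rank shifts, not raw score differences, are the comparable quantity when the underlying scales differ.

```python
# Ranks and scores taken from the two leaderboards in this issue, limited to
# models that appear in both top tens. The scores sit on different scales
# (SWE-rebench resolved rate vs. the Artificial Analysis index), so only the
# rank delta is directly comparable.
swe_rebench = {
    "Claude Opus 4.6":        (1, 65.3),
    "Gemini 3.1 Pro Preview": (6, 62.3),
    "Claude Sonnet 4.6":      (8, 60.7),
}
artificial_analysis = {
    "Claude Opus 4.6":        (7, 53.0),
    "Gemini 3.1 Pro Preview": (2, 57.2),
    "Claude Sonnet 4.6":      (10, 51.7),
}

for model, (swe_rank, swe_score) in swe_rebench.items():
    aa_rank, aa_score = artificial_analysis[model]
    delta = aa_rank - swe_rank  # positive = climbed on SWE-rebench
    print(f"{model}: AA #{aa_rank} ({aa_score}) -> "
          f"SWE-rebench #{swe_rank} ({swe_score}%), rank delta {delta:+d}")
```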

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
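As a rough sketch of that last step, a model's reported number is simply its mean resolved rate across the independent runs; the per-run values below are made up for illustration and are not actual SWE-rebench output.

```python
from statistics import mean, stdev

# Hypothetical resolved rates for one model across five independent runs,
# mirroring the repeated-run protocol described above.
runs = [0.641, 0.655, 0.649, 0.660, 0.652]

score = mean(runs)    # the figure a leaderboard would report
spread = stdev(runs)  # how much single-run luck could move it

print(f"reported score: {score:.1%} (run-to-run spread {spread:.1%})")
```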

#    Model                        Score
1    Claude Opus 4.6              65.3%
2    gpt-5.2-2025-12-11-medium    64.4%
3    GLM-5                        62.8%
4    gpt-5.4-2026-03-05-medium    62.8%
5    GLM-5.1                      62.7%
6    Gemini 3.1 Pro Preview       62.3%
7    DeepSeek-V3.2                60.9%
8    Claude Sonnet 4.6            60.7%
9    Claude Sonnet 4.5            60.0%
10   Qwen3.5-397B-A17B            59.9%

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

#    Model                     Score   tok/s   $/1M
1    Claude Opus 4.7           57.3    62      $10.00
2    Gemini 3.1 Pro Preview    57.2    127     $4.50
3    GPT-5.4                   56.8    80      $5.63
4    Kimi K2.6                 53.9    135     $1.71
5    MiMo-V2.5-Pro             53.8    52      $1.50
6    GPT-5.3 Codex             53.6    77      $4.81
7    Claude Opus 4.6           53      53      $10.00
8    Muse Spark                52.1    0       $0.00
9    Qwen3.6 Max Preview       51.8    38      $2.92
10   Claude Sonnet 4.6         51.7    64      $6.00

Output tokens per second — higher is faster. Minimum intelligence score of 40.

#    Model                     tok/s
1    Qwen3.6 35B A3B           224
2    GPT-5 Codex               200
3    Gemini 3 Flash Preview    195
4    GPT-5.4 mini              182
5    GPT-5.1 Codex             177
6    Grok 4.20 0309            162
7    Grok 4.20 0309 v2         159
8    GPT-5.4 nano              152
9    Qwen3.5 122B A10B         152
10   Kimi K2.6                 135

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

#    Model                 $/1M
1    MiMo-V2-Flash         $0.15
2    DeepSeek V3.2         $0.315
3    GPT-5.4 nano          $0.463
4    MiniMax-M2.7          $0.525
5    KAT Coder Pro V2      $0.525
6    MiniMax-M2.5          $0.525
7    GPT-5 mini            $0.688
8    Qwen3.5 27B           $0.825
9    Qwen3.6 35B A3B       $0.844
10   GLM-4.7               $1.00
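The blended figures above weight input tokens three to one against output tokens. A minimal sketch of that arithmetic, using hypothetical per-million-token prices rather than any listed model's actual pricing:

```python
def blended_cost(input_price: float, output_price: float) -> float:
    """Blended $/1M tokens assuming a 3:1 input/output token mix."""
    return (3 * input_price + 1 * output_price) / 4

# Hypothetical pricing: $0.40 per 1M input tokens, $1.60 per 1M output tokens.
print(f"${blended_cost(0.40, 1.60):.3f} per 1M blended tokens")  # $0.700
```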