The Inference Report

April 25, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, unchanged from the previous measurement, and the Artificial Analysis index shows no structural movement in its top tier either, with GPT-5.5 steady at 60.2 and Claude Opus 4.6 holding at 53. The most notable shifts are in the middle of the Artificial Analysis leaderboard, where DeepSeek V4 Pro enters at rank 12 with 51.5, a debut that points to incremental expansion in the reasoning-focused model space, though the score itself is in line with existing peers rather than a departure from them.

On SWE-rebench, GLM-5 (62.8%), gpt-5.4-2026-03-05-medium (62.8%), and GLM-5.1 (62.7%) cluster tightly in the 62 to 63 percent range, a plateau in absolute gains at the frontier where further differentiation requires sub-point precision. Gemini 3.1 Pro Preview slipped from rank 3 to rank 6 on SWE-rebench and now scores 62.3%, a 4.9-point decline that warrants scrutiny of whether the evaluation conditions or test-set composition shifted, since a swing of that size in an established model more often signals a methodology change than a genuine loss of capability.

The Artificial Analysis benchmark, which tracks a broader set of models, shows no entries above 60.2, a visible gap relative to SWE-rebench's top scores that suggests the two benchmarks measure different problem distributions or difficulty profiles: SWE-rebench appears to emphasize code generation under constraints that newer models handle well, while Artificial Analysis may weight reasoning and multi-step tasks more heavily. Neither benchmark shows the velocity that would indicate a meaningful breakthrough, and the absence of new entrants at the very top suggests the field is consolidating rather than expanding the capability frontier.

Cole Brennan
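
The swings flagged above come down to simple deltas between today's snapshot and the previous one. The sketch below shows one way to surface them; the model names, prior scores, and the 3-point threshold are illustrative placeholders, not the report's own data or tooling.

```python
# Compare two leaderboard snapshots and flag suspiciously large swings.
# Prior scores and the 3-point threshold are placeholders, not this report's data pipeline.

def rank(snapshot: dict[str, float]) -> dict[str, int]:
    """Map each model to its 1-based rank, highest score first."""
    ordered = sorted(snapshot, key=snapshot.get, reverse=True)
    return {model: i + 1 for i, model in enumerate(ordered)}

def flag_swings(prev: dict[str, float], curr: dict[str, float], threshold: float = 3.0):
    """Yield models whose score moved by at least `threshold` points between snapshots."""
    prev_rank, curr_rank = rank(prev), rank(curr)
    for model in sorted(prev.keys() & curr.keys()):
        delta = curr[model] - prev[model]
        if abs(delta) >= threshold:
            yield model, prev[model], curr[model], delta, prev_rank[model], curr_rank[model]

# Hypothetical snapshots for illustration; the previous-day scores are invented.
previous = {"Claude Opus 4.6": 65.3, "Gemini 3.1 Pro Preview": 67.2, "GLM-5": 62.5}
current  = {"Claude Opus 4.6": 65.3, "Gemini 3.1 Pro Preview": 62.3, "GLM-5": 62.8}

for model, old, new, delta, r_old, r_new in flag_swings(previous, current):
    print(f"{model}: {old} -> {new} ({delta:+.1f} pts, rank {r_old} -> {r_new})")
```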

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

#   Model                      Score
1   Claude Opus 4.6            65.3%
2   gpt-5.2-2025-12-11-medium  64.4%
3   GLM-5                      62.8%
4   gpt-5.4-2026-03-05-medium  62.8%
5   GLM-5.1                    62.7%
6   Gemini 3.1 Pro Preview     62.3%
7   DeepSeek-V3.2              60.9%
8   Claude Sonnet 4.6          60.7%
9   Claude Sonnet 4.5          60.0%
10  Qwen3.5-397B-A17B          59.9%
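
As the methodology note above says, each model is run five times and the Score column reflects the resolved rate aggregated across those runs. A minimal sketch of that aggregation, with invented run results standing in for real harness output (the actual SWE-rebench harness and task format are not reproduced here):

```python
# Aggregate repeated-run results into a single leaderboard score.
# Run data is invented for illustration only.
from statistics import mean, stdev

def resolved_rate(run: list[bool]) -> float:
    """Fraction of tasks resolved in one run."""
    return sum(run) / len(run)

def leaderboard_score(runs: list[list[bool]]) -> tuple[float, float]:
    """Mean resolved rate across repeated runs, plus the spread between runs."""
    rates = [resolved_rate(run) for run in runs]
    return mean(rates), stdev(rates)

# Five hypothetical runs of one model over a 10-task subset (True = resolved).
runs = [
    [True, True, False, True, True, True, False, True, True, False],   # 7/10
    [True, True, True, True, False, True, True, True, True, False],    # 8/10
    [True, False, False, True, True, True, False, True, True, False],  # 6/10
    [True, True, False, True, True, True, False, True, False, True],   # 7/10
    [True, True, False, True, True, False, True, True, True, False],   # 7/10
]
score, spread = leaderboard_score(runs)
print(f"resolved rate: {score:.1%} (spread {spread:.1%} across runs)")
```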

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

#   Model                   Score  tok/s  $/1M
1   GPT-5.5                 60.2   113    $11.25
2   Claude Opus 4.7         57.3   66     $10.00
3   Gemini 3.1 Pro Preview  57.2   136    $4.50
4   GPT-5.4                 56.8   83     $5.63
5   Kimi K2.6               53.9   126    $1.71
6   MiMo-V2.5-Pro           53.8   65     $1.50
7   GPT-5.3 Codex           53.6   83     $4.81
8   Claude Opus 4.6         53     60     $10.00
9   Muse Spark              52.1   0      $0.00
10  Qwen3.6 Max Preview     51.8   34     $2.92
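
The Score column here is a composite, not a single benchmark result. The exact components and weights Artificial Analysis uses are not reproduced in this report, so the sketch below only illustrates the mechanics, with an assumed equal weighting and invented sub-scores:

```python
# Fold several per-benchmark scores into one composite index.
# Equal weighting and the sub-scores are assumptions for illustration only;
# the actual Artificial Analysis components and weights are not shown here.

def composite(scores: dict[str, float], weights: dict[str, float] | None = None) -> float:
    """Weighted average of per-benchmark scores on a 0-100 scale."""
    if weights is None:
        weights = {name: 1.0 for name in scores}  # assume equal weights
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

# Invented sub-scores for a hypothetical model.
example = {"coding": 58.0, "math": 71.0, "reasoning": 52.0}
print(f"composite index: {composite(example):.1f}")  # 60.3
```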

Output tokens per second — higher is faster. Minimum intelligence score of 40.

#   Model                   tok/s
1   Gemini 3 Flash Preview  203
2   Qwen3.6 35B A3B         200
3   GPT-5 Codex             198
4   GPT-5.4 mini            192
5   GPT-5.1 Codex           187
6   GPT-5.4 nano            149
7   Qwen3.5 122B A10B       146
8   Gemini 3.1 Pro Preview  136
9   Gemini 3 Pro Preview    131
10  GPT-5.1                 131

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

#   Model              $/1M
1   MiMo-V2-Flash      $0.15
2   DeepSeek V4 Flash  $0.175
3   DeepSeek V3.2      $0.315
4   GPT-5.4 nano       $0.463
5   MiniMax-M2.7       $0.525
6   KAT Coder Pro V2   $0.525
7   MiniMax-M2.5       $0.525
8   GPT-5 mini         $0.688
9   Qwen3.5 27B        $0.825
10  Qwen3.6 35B A3B    $0.844
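
The blended figure combines input and output pricing at the 3:1 input-to-output ratio noted above. As a worked example of that arithmetic (the per-million-token prices are invented, not any listed model's rates):

```python
# Blended $/1M tokens at a 3:1 input-to-output ratio, as used in the table above.
# The example prices are invented and do not correspond to any listed model.

def blended_cost(input_per_m: float, output_per_m: float,
                 input_parts: float = 3.0, output_parts: float = 1.0) -> float:
    """Weighted average price per 1M tokens for a 3:1 input/output mix."""
    total_parts = input_parts + output_parts
    return (input_per_m * input_parts + output_per_m * output_parts) / total_parts

# e.g. $0.50 per 1M input tokens and $2.00 per 1M output tokens
print(f"${blended_cost(0.50, 2.00):.3f} per 1M tokens (blended)")  # $0.875
```

At a 3:1 mix the output price carries only a quarter of the weight, which is why output-heavy pricing matters less in the blended figure than the headline output rate suggests.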