The Inference Report

April 28, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, unchanged from the previous evaluation, while the Artificial Analysis index shows a more fragmented picture, with GPT-5.5 leading at 60.2. On SWE-rebench the top tier remains stable: gpt-5.2-2025-12-11-medium sits at 64.4%, GLM-5 and gpt-5.4-2026-03-05-medium are tied at 62.8%, and GLM-5.1 follows at 62.7%.

The more striking story is how differently the two leaderboards order the mid-range. GLM-4.7, ranked 42 on Artificial Analysis with 42.1 points, sits at rank 14 on SWE-rebench with 58.7%; GLM-5 moves from rank 16 to rank 3, Kimi K2.5 from rank 27 to rank 16, and Kimi K2 Thinking from rank 51 to rank 21. The shift also runs the other way: Gemini 3.1 Pro Preview, third on Artificial Analysis at 57.2, drops to rank 6 on SWE-rebench despite scoring 62.3%, suggesting the two evaluations measure different problem-solving dimensions. The divergence can be large in absolute terms as well: Claude Opus 4.6 scores 65.3% on SWE-rebench but only 53 on Artificial Analysis, a 12.3-point gap, though the two scales are not directly comparable.

Relative to the previous evaluation, no model appears to have regressed in absolute performance; rankings shifted because other models improved or were newly added. The SWE-rebench methodology, which focuses on software engineering tasks, appears to reward models differently than the broader Artificial Analysis evaluation, and without visibility into the specific test-case changes or evaluation-protocol updates, it is unclear whether these movements reflect genuine capability differences or methodological adjustments to the benchmark itself.

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance (a sketch of that repeated-run averaging follows the table below).

 #   Model                        Score
 1   Claude Opus 4.6              65.3%
 2   gpt-5.2-2025-12-11-medium    64.4%
 3   GLM-5                        62.8%
 4   gpt-5.4-2026-03-05-medium    62.8%
 5   GLM-5.1                      62.7%
 6   Gemini 3.1 Pro Preview       62.3%
 7   DeepSeek-V3.2                60.9%
 8   Claude Sonnet 4.6            60.7%
 9   Claude Sonnet 4.5            60.0%
10   Qwen3.5-397B-A17B            59.9%
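The repeated-run protocol is easy to picture in code. The sketch below is illustrative only: the solve callable, the task list, and the plain five-run averaging are assumptions about how such an aggregation could work, not SWE-rebench's published harness.

    import statistics

    def score_model(model, tasks, solve, runs=5):
        """Average pass rate over several independent runs.

        solve(model, task) -> bool stands in for one scaffolded attempt at a
        task; repeating the full pass `runs` times and averaging smooths out
        run-to-run stochastic variance.
        """
        pass_rates = []
        for _ in range(runs):
            solved = sum(1 for task in tasks if solve(model, task))
            pass_rates.append(solved / len(tasks))
        return statistics.mean(pass_rates), statistics.stdev(pass_rates)

Reporting the spread alongside the mean is what makes a multi-run design useful: a single run cannot distinguish a lucky draw from a real capability difference.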

Artificial Analysis composite index across coding, math, and reasoning benchmarks (a sketch of how a composite of this kind can be computed follows the table below).

 #   Model                    Score  tok/s    $/1M
 1   GPT-5.5                   60.2     78  $11.25
 2   Claude Opus 4.7           57.3     56  $10.00
 3   Gemini 3.1 Pro Preview    57.2    135   $4.50
 4   GPT-5.4                   56.8     86   $5.63
 5   Kimi K2.6                 53.9      0   $1.71
 6   MiMo-V2.5-Pro             53.8     65   $1.50
 7   GPT-5.3 Codex             53.6     91   $4.81
 8   Claude Opus 4.6           53       48  $10.00
 9   Muse Spark                52.1      0   $0.00
10   Qwen3.6 Max Preview       51.8     34   $2.92
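As a rough illustration of how a composite index like this can be assembled, the snippet below takes a weighted mean of per-category scores. The category names, values, and equal weighting are hypothetical and are not Artificial Analysis's actual weighting scheme.

    def composite_index(scores, weights=None):
        """Weighted mean of per-category benchmark scores."""
        if weights is None:
            weights = {name: 1.0 for name in scores}  # equal weights by default
        total = sum(weights.values())
        return sum(scores[name] * weights[name] for name in scores) / total

    # Hypothetical sub-scores for one model across the three categories.
    print(round(composite_index({"coding": 58.0, "math": 64.0, "reasoning": 59.0}), 1))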

Output tokens per second — higher is faster. Minimum intelligence score of 40.

 #   Model                    tok/s
 1   Gemini 3 Flash Preview     200
 2   GPT-5 Codex                200
 3   Qwen3.6 35B A3B            200
 4   GPT-5.4 mini               175
 5   GPT-5.1 Codex              159
 6   GPT-5.4 nano               157
 7   Qwen3.5 122B A10B          156
 8   GPT-5.1                    153
 9   Gemini 3 Pro Preview       143
10   Gemini 3.1 Pro Preview     135

Blended cost per 1M tokens at a 3:1 input/output ratio; lower is cheaper. Minimum intelligence score of 40. A worked example of the blending formula follows the table.

 #   Model                $/1M
 1   MiMo-V2-Flash       $0.15
 2   DeepSeek V4 Flash  $0.175
 3   DeepSeek V3.2      $0.315
 4   GPT-5.4 nano       $0.463
 5   MiniMax-M2.7       $0.525
 6   KAT Coder Pro V2   $0.525
 7   MiniMax-M2.5       $0.525
 8   GPT-5 mini         $0.688
 9   Qwen3.5 27B        $0.825
10   Qwen3.6 35B A3B    $0.844
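The blended figure combines input and output prices at the stated 3:1 ratio. A minimal sketch of that formula, with made-up per-million-token prices:

    def blended_price(input_per_1m, output_per_1m, input_ratio=3.0):
        """Blend per-1M-token prices at an input:output ratio of input_ratio:1."""
        return (input_ratio * input_per_1m + output_per_1m) / (input_ratio + 1.0)

    # Hypothetical prices: $1.00 per 1M input tokens, $2.00 per 1M output tokens.
    # (3 * 1.00 + 2.00) / 4 = $1.25 blended per 1M tokens.
    print(blended_price(1.00, 2.00))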