The Inference Report

April 26, 2026

Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities fairly on real-world software engineering tasks. Unlike many evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance; a sketch of that five-run aggregation follows the table.

 #  Model                      Score
 1  Claude Opus 4.6            65.3%
 2  gpt-5.2-2025-12-11-medium  64.4%
 3  GLM-5                      62.8%
 4  gpt-5.4-2026-03-05-medium  62.8%
 5  GLM-5.1                    62.7%
 6  Gemini 3.1 Pro Preview     62.3%
 7  DeepSeek-V3.2              60.9%
 8  Claude Sonnet 4.6          60.7%
 9  Claude Sonnet 4.5          60.0%
10  Qwen3.5-397B-A17B          59.9%
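
To make the five-run protocol concrete, here is a minimal Python sketch of how per-run resolved rates could be aggregated into a single reported score. The run values are invented (chosen so their mean lands on the table's top score); SWE-rebench's actual harness is not shown.

import statistics

# Invented per-run resolved rates (fraction of tasks solved) for one model;
# the benchmark's harness produces one such value per independent run.
runs = [0.655, 0.648, 0.651, 0.657, 0.654]

score = statistics.mean(runs)    # the headline number reported in the table
spread = statistics.stdev(runs)  # run-to-run noise the five-run protocol averages out

print(f"score = {score:.1%} (+/- {spread:.1%} over {len(runs)} runs)")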

The Artificial Analysis composite index, aggregating coding, math, and reasoning benchmarks into one intelligence score; a sketch of one plausible aggregation follows the table.

 #  Model                   Score  tok/s    $/1M
 1  GPT-5.5                  60.2    101  $11.25
 2  Claude Opus 4.7          57.3     64  $10.00
 3  Gemini 3.1 Pro Preview   57.2    135   $4.50
 4  GPT-5.4                  56.8     83   $5.63
 5  Kimi K2.6                53.9    108   $1.71
 6  MiMo-V2.5-Pro            53.8     67   $1.50
 7  GPT-5.3 Codex            53.6     86   $4.81
 8  Claude Opus 4.6            53     60  $10.00
 9  Muse Spark               52.1      0   $0.00
10  Qwen3.6 Max Preview      51.8     34   $2.92
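
Artificial Analysis does not spell out its formula here, so the sketch below is only one plausible reading: an equal-weight mean of category scores already on a 0-100 scale. The category names, the equal weighting, and the numbers are all assumptions, not the provider's actual methodology.

# Hedged sketch of a composite index: equal-weight mean of category scores
# that are already normalized to 0-100. The real index may normalize and
# weight its component benchmarks differently.
def composite_index(category_scores: dict[str, float]) -> float:
    return sum(category_scores.values()) / len(category_scores)

example = {"coding": 58.0, "math": 63.5, "reasoning": 59.1}  # invented numbers
print(f"composite = {composite_index(example):.1f}")  # -> composite = 60.2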

Output tokens per second (higher is faster), restricted to models with an intelligence score of at least 40; a sketch of one way to measure decode throughput follows the table.

 #  Model                   tok/s
 1  Gemini 3 Flash Preview     201
 2  Qwen3.6 35B A3B            200
 3  GPT-5 Codex                199
 4  GPT-5.1 Codex              179
 5  GPT-5.4 mini               176
 6  Qwen3.5 122B A10B          154
 7  GPT-5.4 nano               149
 8  GPT-5.1                    136
 9  Gemini 3.1 Pro Preview     135
10  Gemini 3 Pro Preview       134
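
One way to measure decode throughput yourself, sketched below under assumptions: stream a completion, count output tokens as chunks arrive, and divide by the wall-clock time from the first chunk onward. The whitespace token count and the simulated stream are stand-ins for real tokenizer counts and a real streaming client.

import time

def tokens_per_second(chunk_stream):
    """Rough decode throughput over a stream of text chunks.

    The whitespace split is only a proxy for real tokenization, and the
    clock starts at the first chunk, so time-to-first-token is excluded,
    which is how decode speed (as opposed to latency) is usually reported.
    """
    n_tokens = 0
    t_first = None
    for chunk in chunk_stream:
        if t_first is None:
            t_first = time.perf_counter()
        n_tokens += len(chunk.split())
    if t_first is None:
        return 0.0  # empty stream
    elapsed = time.perf_counter() - t_first
    return n_tokens / max(elapsed, 1e-9)

def fake_stream():
    # Simulated stream for illustration; substitute a real streaming response.
    for chunk in ["def add(a, b):", "    return a + b", "  # done"]:
        time.sleep(0.01)  # stand-in for network/decode delay
        yield chunk

print(f"{tokens_per_second(fake_stream()):.0f} tok/s")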

Blended cost per 1M tokens at a 3:1 input/output weighting (lower is cheaper), restricted to models with an intelligence score of at least 40; the blending arithmetic is sketched after the table.

 #  Model                $/1M
 1  MiMo-V2-Flash        $0.15
 2  DeepSeek V4 Flash    $0.175
 3  DeepSeek V3.2        $0.315
 4  GPT-5.4 nano         $0.463
 5  MiniMax-M2.7         $0.525
 6  KAT Coder Pro V2     $0.525
 7  MiniMax-M2.5         $0.525
 8  GPT-5 mini           $0.688
 9  Qwen3.5 27B          $0.825
10  Qwen3.6 35B A3B      $0.844
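
The 3:1 blend in the definition above works out to three parts input price plus one part output price, divided by four. A minimal sketch, with invented per-direction prices; only the weighting comes from the table's definition.

def blended_price(input_per_m: float, output_per_m: float) -> float:
    # 3:1 input/output blend: three parts input price, one part output
    # price, expressed in USD per 1M tokens.
    return (3 * input_per_m + output_per_m) / 4

# Invented per-direction prices, not taken from the table above.
print(f"${blended_price(0.10, 0.30):.3f} per 1M tokens (blended)")  # -> $0.150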