The Inference Report

May 6, 2026

Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities fairly on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
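Averaging over repeated runs is what turns a noisy per-run resolved rate into a stable score. A minimal sketch of that aggregation, using hypothetical run results (SWE-rebench's exact reporting may differ):

```python
from statistics import mean, stdev

def aggregate_runs(resolved_per_run: list[float]) -> tuple[float, float]:
    """Return the mean and sample standard deviation of a model's
    resolved rate across repeated benchmark runs.

    resolved_per_run holds the fraction of tasks resolved in each run.
    """
    return mean(resolved_per_run), stdev(resolved_per_run)

# Five hypothetical runs of one model:
runs = [0.64, 0.66, 0.65, 0.63, 0.67]
avg, spread = aggregate_runs(runs)
print(f"score = {avg:.1%} (run-to-run spread {spread:.1%})")
```

With five runs, the mean damps single-run luck, and the spread gives a rough sense of how much stochastic variance remains.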

| # | Model | Score |
|---|-------|-------|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | Junie | 62.8% |
| 5 | gpt-5.4-2026-03-05-medium | 62.8% |
| 6 | GLM-5.1 | 62.7% |
| 7 | Gemini 3.1 Pro Preview | 62.3% |
| 8 | DeepSeek-V3.2 | 60.9% |
| 9 | Claude Sonnet 4.6 | 60.7% |
| 10 | Claude Sonnet 4.5 | 60.0% |

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

| # | Model | Score | tok/s | $/1M |
|---|-------|-------|-------|------|
| 1 | GPT-5.5 | 60.2 | 79 | $11.25 |
| 2 | Claude Opus 4.7 | 57.3 | 64 | $10.94 |
| 3 | Gemini 3.1 Pro Preview | 57.2 | 138 | $4.50 |
| 4 | GPT-5.4 | 56.8 | 85 | $5.63 |
| 5 | Kimi K2.6 | 53.9 | 28 | $1.71 |
| 6 | MiMo-V2.5-Pro | 53.8 | 64 | $1.50 |
| 7 | GPT-5.3 Codex | 53.6 | 87 | $4.81 |
| 8 | Grok 4.3 | 53.2 | 101 | $1.56 |
| 9 | Claude Opus 4.6 | 53 | 57 | $10.94 |
| 10 | Muse Spark | 52.1 | 0 | $0.00 |

Output tokens per second — higher is faster. Minimum intelligence score of 40.

| # | Model | tok/s |
|---|-------|-------|
| 1 | Gemini 3 Flash Preview | 232 |
| 2 | GPT-5.1 Codex | 195 |
| 3 | Qwen3.6 35B A3B | 189 |
| 4 | GPT-5.4 mini | 178 |
| 5 | GPT-5 Codex | 175 |
| 6 | GPT-5.4 nano | 163 |
| 7 | GPT-5.1 | 153 |
| 8 | Qwen3.5 122B A10B | 152 |
| 9 | MiMo-V2-Flash | 148 |
| 10 | MiMo-V2-Omni | 141 |

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

| # | Model | $/1M |
|---|-------|------|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V4 Flash | $0.175 |
| 3 | DeepSeek V3.2 | $0.337 |
| 4 | GPT-5.4 nano | $0.463 |
| 5 | MiniMax-M2.7 | $0.525 |
| 6 | KAT Coder Pro V2 | $0.525 |
| 7 | MiniMax-M2.5 | $0.525 |
| 8 | Qwen3.6 35B A3B | $0.557 |
| 9 | GPT-5 mini | $0.688 |
| 10 | MiMo-V2.5 | $0.80 |
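A blended price at a 3:1 input/output mix is just a weighted average of a provider's separate input and output rates. A minimal sketch of that arithmetic (the per-token prices below are hypothetical, not any model's actual rates):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Blended $/1M tokens for a given input:output token mix.

    With the default 3:1 ratio, three quarters of the tokens are billed
    at the input rate and one quarter at the output rate.
    """
    total = input_ratio + output_ratio
    return (input_ratio * input_per_m + output_ratio * output_per_m) / total

# Hypothetical rates: $0.50 per 1M input tokens, $2.00 per 1M output tokens.
print(f"${blended_price(0.50, 2.00):.3f} per 1M tokens")  # (3*0.50 + 2.00)/4 = $0.875
```

Because input tokens dominate the 3:1 mix, the blended figure sits much closer to the input rate than to the (usually pricier) output rate.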