The Inference Report

March 10, 2026

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

| # | Model | Score |
|---|-------|-------|
| 1 | Claude Code | 52.9% |
| 2 | Junie | 52.1% |
| 3 | Claude Opus 4.6 | 51.7% |
| 4 | gpt-5.2-2025-12-11-xhigh | 51.7% |
| 5 | gpt-5.2-2025-12-11-medium | 51.0% |
| 6 | gpt-5.1-codex-max | 48.5% |
| 7 | Claude Sonnet 4.5 | 47.1% |
| 8 | Gemini 3 Pro Preview | 46.7% |
| 9 | Gemini 3 Flash Preview | 46.7% |
| 10 | gpt-5.2-codex | 45.0% |
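The five-run protocol described above amounts to averaging per-run resolve rates. A minimal sketch with hypothetical per-run numbers (not actual SWE-rebench data):

```python
# Hypothetical resolve rates from five independent runs of one model.
runs = [0.520, 0.530, 0.510, 0.540, 0.545]

# The reported score is the mean across runs, which smooths out
# run-to-run stochastic variance in agent trajectories.
score = sum(runs) / len(runs)
print(f"{score:.1%}")  # → 52.9%
```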

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

| # | Model | Score | tok/s | $/1M |
|---|-------|-------|-------|------|
| 1 | Gemini 3.1 Pro Preview | 57.2 | 110 | $4.50 |
| 2 | GPT-5.4 | 57 | 78 | $5.63 |
| 3 | GPT-5.3 Codex | 54 | 68 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 55 | $10.00 |
| 5 | Claude Sonnet 4.6 | 51.7 | 69 | $6.00 |
| 6 | GPT-5.2 | 51.3 | 68 | $4.81 |
| 7 | GLM-5 | 49.8 | 50 | $1.55 |
| 8 | Claude Opus 4.5 | 49.7 | 60 | $10.00 |
| 9 | GPT-5.2 Codex | 49 | 65 | $4.81 |
| 10 | Gemini 3 Pro Preview | 48.4 | 116 | $4.50 |

Output tokens per second — higher is faster. Minimum intelligence score of 40.

| # | Model | tok/s |
|---|-------|-------|
| 1 | GPT-5 Codex | 182 |
| 2 | Gemini 3 Flash Preview | 164 |
| 3 | Qwen3.5 122B A10B | 150 |
| 4 | GPT-5.1 Codex | 121 |
| 5 | MiMo-V2-Flash | 118 |
| 6 | Gemini 3 Pro Preview | 116 |
| 7 | Gemini 3.1 Pro Preview | 110 |
| 8 | GLM-4.7 | 105 |
| 9 | GPT-5.1 | 98 |
| 10 | Qwen3.5 27B | 90 |

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

| # | Model | $/1M |
|---|-------|------|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | MiniMax-M2.5 | $0.525 |
| 4 | GPT-5 mini | $0.688 |
| 5 | Qwen3.5 27B | $0.825 |
| 6 | GLM-4.7 | $1.00 |
| 7 | Kimi K2 Thinking | $1.07 |
| 8 | Qwen3.5 122B A10B | $1.10 |
| 9 | Gemini 3 Flash Preview | $1.13 |
| 10 | Kimi K2.5 | $1.20 |
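The blended $/1M figures above weight input and output token prices 3:1. A minimal sketch of that arithmetic, using hypothetical per-1M prices (not drawn from the table):

```python
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Blended $/1M tokens at a 3:1 input/output token mix:
    three parts input price to one part output price."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# Hypothetical prices: $0.50/1M input, $2.00/1M output.
print(blended_price(0.50, 2.00))  # → 0.875
```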