The Inference Report

March 13, 2026

Claude Code holds the top position on SWE-rebench at 52.9%, with Junie at 52.1% and Claude Opus 4.6 at 51.7%. The most striking pattern, though, is in the Artificial Analysis index, where the rankings have shifted substantially since the prior update:

- Claude Opus 4.5 dropped from position 8 (49.7) to position 12 (43.8)
- GLM-5 fell from position 7 (49.8) to position 15 (42.1)
- Kimi K2.5 plummeted from position 13 (46.8) to position 19 (37.9)
- Kimi K2 Thinking climbed from position 28 (40.9) to position 13 (43.8)
- GLM-4.6 rose from position 54 (32.5) to position 22 (37.1)
- DeepSeek V3.2 Speciale fell from position 46 (34.1) to position 66 (29.4), the steepest drop of the update

The scale of these movements raises questions about benchmark stability. Score shifts of roughly five to nine points in a single update are large enough to suggest meaningful model degradation, evaluation-methodology changes, or dataset variance rather than ordinary performance drift. Kimi K2 Thinking's 15-position rise paired with Kimi K2.5's 6-position fall is particularly difficult to interpret without visibility into what changed in the evaluation protocol. On SWE-rebench, by contrast, the top tier remains essentially frozen, which either reflects genuine consolidation at the capability ceiling or indicates that the benchmark has reached saturation, where further discrimination between models is difficult. The volatility in the Artificial Analysis index demands scrutiny: if these are the same models tested under consistent conditions, such reversals warrant investigation into whether scoring criteria, test-set composition, or model access states shifted between runs. A short sketch after the Artificial Analysis table below shows how these rank and score deltas fall out of two snapshots.

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities fairly on real-world software-engineering tasks. Unlike many evaluations, it uses the same standardized scaffold for every model, continuously refreshes its dataset to limit contamination, and runs each model five times to account for stochastic variance.

| # | Model | Score |
|---|-------|-------|
| 1 | Claude Code | 52.9% |
| 2 | Junie | 52.1% |
| 3 | Claude Opus 4.6 | 51.7% |
| 4 | gpt-5.2-2025-12-11-xhigh | 51.7% |
| 5 | gpt-5.2-2025-12-11-medium | 51.0% |
| 6 | gpt-5.1-codex-max | 48.5% |
| 7 | Claude Sonnet 4.5 | 47.1% |
| 8 | Gemini 3 Pro Preview | 46.7% |
| 9 | Gemini 3 Flash Preview | 46.7% |
| 10 | gpt-5.2-codex | 45.0% |
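
Because each model is run five times, a per-model score can be read as a mean over runs with an attached dispersion estimate. Below is a minimal Python sketch of that idea; the per-run numbers are invented (chosen to average to Claude Code's 52.9%), and SWE-rebench's actual aggregation may differ.

```python
from statistics import mean, stdev

# Hypothetical per-run resolved rates for one model across five runs.
# These figures are illustrative, not published per-run data.
runs = [0.515, 0.534, 0.528, 0.521, 0.547]

score = mean(runs)                    # headline score as the mean over runs
sem = stdev(runs) / len(runs) ** 0.5  # standard error of that mean

print(f"score: {score:.1%} +/- {sem:.1%}")  # -> score: 52.9% +/- 0.6%
```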

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

| # | Model | Score | tok/s | $/1M |
|---|-------|-------|-------|------|
| 1 | Gemini 3.1 Pro Preview | 57.2 | 116 | $4.50 |
| 2 | GPT-5.4 | 57 | 83 | $5.63 |
| 3 | GPT-5.3 Codex | 54 | 61 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 55 | $10.00 |
| 5 | Claude Sonnet 4.6 | 51.7 | 61 | $6.00 |
| 6 | GPT-5.2 | 51.3 | 61 | $4.81 |
| 7 | GLM-5 | 49.8 | 63 | $1.55 |
| 8 | Claude Opus 4.5 | 49.7 | 61 | $10.00 |
| 9 | GPT-5.2 Codex | 49 | 80 | $4.81 |
| 10 | Grok 4.20 Beta 0309 | 48.5 | 251 | $3.00 |
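
The movements quoted in the commentary above are just rank and score deltas between two index snapshots. A small sketch of that computation, using hypothetical {model: (rank, score)} maps populated with the figures cited:

```python
# Two hypothetical snapshots of the index, {model: (rank, score)},
# populated with the figures quoted in the commentary.
prev = {
    "Claude Opus 4.5": (8, 49.7),
    "GLM-5": (7, 49.8),
    "Kimi K2.5": (13, 46.8),
    "Kimi K2 Thinking": (28, 40.9),
}
curr = {
    "Claude Opus 4.5": (12, 43.8),
    "GLM-5": (15, 42.1),
    "Kimi K2.5": (19, 37.9),
    "Kimi K2 Thinking": (13, 43.8),
}

for model in sorted(prev.keys() & curr.keys()):
    (r0, s0), (r1, s1) = prev[model], curr[model]
    # A positive place delta means the model climbed (its rank number shrank).
    print(f"{model}: rank {r0} -> {r1} ({r0 - r1:+d} places), "
          f"score {s0} -> {s1} ({s1 - s0:+.1f} pts)")
```

Kimi K2 Thinking comes out at +15 places and +2.9 points, which is why its pairing with K2.5's 6-place fall stands out.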

Output tokens per second — higher is faster. Minimum intelligence score of 40.

| # | Model | tok/s |
|---|-------|-------|
| 1 | Grok 4.20 Beta 0309 | 251 |
| 2 | GPT-5 Codex | 178 |
| 3 | Gemini 3 Flash Preview | 168 |
| 4 | Qwen3.5 122B A10B | 149 |
| 5 | MiMo-V2-Flash | 128 |
| 6 | Gemini 3 Pro Preview | 125 |
| 7 | Gemini 3.1 Pro Preview | 116 |
| 8 | GPT-5.1 Codex | 107 |
| 9 | Qwen3.5 27B | 88 |
| 10 | GPT-5.4 | 83 |
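
Both the speed and cost boards apply the same rule: filter to models with an intelligence score of at least 40, then sort on the remaining metric. A sketch of the speed board, with a few rows pulled from the composite table above:

```python
# (model, intelligence score, output tok/s) rows from the composite table.
models = [
    ("GPT-5.4", 57.0, 83),
    ("Gemini 3.1 Pro Preview", 57.2, 116),
    ("Grok 4.20 Beta 0309", 48.5, 251),
]

# Keep models at or above the intelligence floor, fastest first.
board = sorted((m for m in models if m[1] >= 40),
               key=lambda m: m[2], reverse=True)
for rank, (name, _, tok_s) in enumerate(board, start=1):
    print(f"{rank}. {name}: {tok_s} tok/s")
```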

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

| # | Model | $/1M |
|---|-------|------|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | MiniMax-M2.5 | $0.525 |
| 4 | GPT-5 mini | $0.688 |
| 5 | Qwen3.5 27B | $0.825 |
| 6 | GLM-4.7 | $1.00 |
| 7 | Kimi K2 Thinking | $1.07 |
| 8 | Qwen3.5 122B A10B | $1.10 |
| 9 | Gemini 3 Flash Preview | $1.13 |
| 10 | Kimi K2.5 | $1.20 |
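
The blended figure folds input and output prices into one number at a 3:1 ratio, i.e. blended = (3 × input + 1 × output) / 4 per million tokens. A sketch with illustrative per-direction prices (the table lists only the blended result, not the components):

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blend per-1M-token prices at the 3:1 input/output ratio used above."""
    return (3 * input_per_m + output_per_m) / 4

# Illustrative per-direction prices; only the 3:1 weighting comes from
# the table's caption.
print(f"${blended_price(0.10, 0.30):.3f} per 1M tokens")  # -> $0.150 per 1M tokens
```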