The Inference Report

March 11, 2026

The SWE-rebench leaderboard shows no movement at the top tier: Claude Code, Junie, Claude Opus 4.6, and two gpt-5.2 variants hold positions 1 through 5 with scores between 52.9% and 51.0%, unchanged from the prior cycle.

Below that band, however, several models saw substantial rank shifts that expose inconsistencies between the two evaluation sources. Claude Opus 4.5 dropped from position 8 (49.7 on Artificial Analysis) to position 12 (43.8% on SWE-rebench), a 5.9-point gap that suggests either a significant performance regression or a methodological divergence between the benchmarks. Kimi K2 Thinking climbed 14 positions on SWE-rebench (from 27 to 13) despite logging only a 2.9-point gain (40.9 to 43.8%), while GLM-5 fell from position 7 (49.8 on Artificial Analysis) to position 15 (42.1% on SWE-rebench), a 7.7-point drop. Kimi K2.5 reversed course entirely, sinking from position 12 (46.8 on Artificial Analysis) to position 19 (37.9% on SWE-rebench).

These divergences raise questions about benchmark stability: SWE-rebench appears to reward certain architectural choices or fine-tuning strategies that Artificial Analysis does not, yet neither source clarifies whether the gap reflects genuine capability differences or evaluation artifacts. The comparison is also complicated by units, since the Artificial Analysis figures are composite index values while the SWE-rebench figures are task-resolution percentages, so the point gaps above are indicative rather than exact. The lack of detailed methodology documentation for either benchmark makes it difficult to assess whether these swings represent real performance variation or measurement drift. At the frontier, the stability of the top five models suggests that the highest-capability systems may be approaching a plateau on this task distribution, while the volatility in the 7 to 20 ranking band indicates that mid-tier models remain sensitive to benchmark design choices.
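To make the cross-benchmark deltas concrete, here is a minimal sketch in plain Python that reproduces the arithmetic above. The data layout and variable names are my own, not an API of either leaderboard, and the score subtraction mixes an Artificial Analysis index value with a SWE-rebench percentage, exactly as the prose does.

```python
# Illustrative recomputation of the rank and score deltas quoted above.
# model: (AA rank, AA index, SWE-rebench rank, SWE-rebench %)
divergent = {
    "Claude Opus 4.5":  (8, 49.7, 12, 43.8),
    "Kimi K2 Thinking": (27, 40.9, 13, 43.8),
    "GLM-5":            (7, 49.8, 15, 42.1),
    "Kimi K2.5":        (12, 46.8, 19, 37.9),
}

for model, (aa_rank, aa_score, swe_rank, swe_score) in divergent.items():
    rank_delta = aa_rank - swe_rank      # positive = higher-ranked on SWE-rebench
    score_delta = swe_score - aa_score   # caveat: index points vs. percentage points
    print(f"{model:<17} rank {aa_rank:>2} -> {swe_rank:>2} ({rank_delta:+d}), "
          f"score {aa_score:.1f} -> {swe_score:.1f} ({score_delta:+.1f})")
```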

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities fairly on real-world software engineering tasks. Unlike other evaluations, it uses the same standardized scaffold for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
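The five-run protocol is easy to picture. A minimal sketch, assuming each run reports a fraction of tasks resolved; the function and the run values are illustrative, not SWE-rebench's actual harness:

```python
from statistics import mean, stdev

def aggregate_runs(resolved_rates: list[float]) -> tuple[float, float]:
    """Collapse per-run resolved rates into a reported score and its run-to-run spread."""
    return mean(resolved_rates), stdev(resolved_rates)

# e.g. five independent runs of one model over the same task set (made-up numbers)
runs = [0.531, 0.525, 0.529, 0.534, 0.526]
score, spread = aggregate_runs(runs)
print(f"reported score: {score:.1%} (run-to-run sd {spread:.2%})")  # -> 52.9%
```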

 #  Model                       Score
 1  Claude Code                 52.9%
 2  Junie                       52.1%
 3  Claude Opus 4.6             51.7%
 4  gpt-5.2-2025-12-11-xhigh    51.7%
 5  gpt-5.2-2025-12-11-medium   51.0%
 6  gpt-5.1-codex-max           48.5%
 7  Claude Sonnet 4.5           47.1%
 8  Gemini 3 Pro Preview        46.7%
 9  Gemini 3 Flash Preview      46.7%
10  gpt-5.2-codex               45.0%

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

 #  Model                    Score   tok/s     $/1M
 1  Gemini 3.1 Pro Preview    57.2     106    $4.50
 2  GPT-5.4                   57        78    $5.63
 3  GPT-5.3 Codex             54        65    $4.81
 4  Claude Opus 4.6           53        55   $10.00
 5  Claude Sonnet 4.6         51.7      69    $6.00
 6  GPT-5.2                   51.3      67    $4.81
 7  GLM-5                     49.8      58    $1.55
 8  Claude Opus 4.5           49.7      62   $10.00
 9  GPT-5.2 Codex             49        73    $4.81
10  Gemini 3 Pro Preview      48.4     115    $4.50
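Artificial Analysis describes this score as a composite over coding, math, and reasoning, but the weighting is not documented in this issue. A minimal sketch, assuming a plain equal-weight mean and invented category scores, just to show the shape of the calculation:

```python
def composite_index(coding: float, math: float, reasoning: float) -> float:
    """Equal-weight mean over three category scores; the real weighting is unknown."""
    return (coding + math + reasoning) / 3

# Hypothetical category scores chosen to land on a 57.2 composite
print(round(composite_index(coding=58.0, math=61.0, reasoning=52.6), 1))  # 57.2
```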

Output tokens per second; higher is faster. Only models with an intelligence score of at least 40 are included.

 #  Model                    tok/s
 1  GPT-5 Codex                186
 2  Gemini 3 Flash Preview     166
 3  Qwen3.5 122B A10B          154
 4  MiMo-V2-Flash              136
 5  Gemini 3 Pro Preview       115
 6  GPT-5.1 Codex              114
 7  Gemini 3.1 Pro Preview     106
 8  GLM-4.7                     90
 9  Qwen3.5 27B                 89
10  GPT-5.1                     79

Blended cost per 1M tokens at a 3:1 input/output ratio; lower is cheaper. Only models with an intelligence score of at least 40 are included.

 #  Model                      $/1M
 1  MiMo-V2-Flash             $0.15
 2  DeepSeek V3.2             $0.315
 3  MiniMax-M2.5              $0.525
 4  GPT-5 mini                $0.688
 5  Qwen3.5 27B               $0.825
 6  GLM-4.7                   $1.00
 7  Kimi K2 Thinking          $1.07
 8  Qwen3.5 122B A10B         $1.10
 9  Gemini 3 Flash Preview    $1.13
10  Kimi K2.5                 $1.20
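The blended figures above follow the stated 3:1 input/output mix. A minimal sketch of that arithmetic, with hypothetical per-direction prices, since the split prices for the listed models are not given here:

```python
def blended_per_1m(input_price: float, output_price: float) -> float:
    """Blend per-1M-token prices at 3 parts input to 1 part output."""
    return (3 * input_price + output_price) / 4

# e.g. $0.10/1M input and $0.30/1M output blend to $0.15/1M
print(f"${blended_per_1m(0.10, 0.30):.2f}")  # $0.15
```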