The Inference Report

March 9, 2026

Claude Code holds the top position on SWE-rebench at 52.9%, unchanged from the previous cycle, while new entrant Junie debuts in second place at 52.1%. Claude Opus 4.6 holds steady at 51.7%, now tied with gpt-5.2-2025-12-11-xhigh, but slips one position to third as Junie enters above it.

The most notable movement sits lower in the rankings, where the two leaderboards diverge. Claude Opus 4.5 scores 49.7 on the Artificial Analysis index (position 8) but only 43.8% on SWE-rebench (position 12), a roughly six-point gap that suggests either a genuine weakness in code-solving or a meaningful difference in what the two benchmarks measure. GLM-5 shows a similar split (49.8 versus 42.1%), as does Kimi K2.5 (46.8 versus 37.9%), hinting that the Artificial Analysis composite weights capabilities SWE-rebench does not. The pattern runs the other way for Kimi K2 Thinking, up three positions at 43.8% on SWE-rebench against 40.9 on Artificial Analysis, and for GLM-4.6 (37.1% versus 32.5), which could reflect recent model updates or simply that SWE-rebench captures these models' strengths more clearly.

The top tier remains clustered between 51.0% and 52.9%, with no model breaking 53%, which looks more like a plateau on this benchmark's task distribution than rapid improvement. Without full methodology details for either benchmark, the divergence warrants scrutiny, and part of it may be definitional: Artificial Analysis blends coding with math and reasoning into a composite index, while SWE-rebench isolates real-world software engineering tasks, so the two were never measuring quite the same thing.
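To make the cross-benchmark argument concrete, the sketch below tabulates the per-model gap for the five models quoted above, using the scores from the prose. One caveat: the Artificial Analysis number is a composite index and the SWE-rebench number is a resolved-task percentage, so the two are not on a common scale and the gaps are suggestive rather than rigorous.

    # Per-model gap (SWE-rebench minus Artificial Analysis) for the five
    # models discussed above. Scores are the ones quoted in this issue.
    scores = {
        #  model               (Artificial Analysis, SWE-rebench)
        "Claude Opus 4.5":  (49.7, 43.8),
        "GLM-5":            (49.8, 42.1),
        "Kimi K2.5":        (46.8, 37.9),
        "Kimi K2 Thinking": (40.9, 43.8),
        "GLM-4.6":          (32.5, 37.1),
    }

    for model, (aa, swe) in scores.items():
        print(f"{model:18s} {swe - aa:+5.1f} pts")
    # Claude Opus 4.5     -5.9 pts
    # GLM-5               -7.7 pts
    # Kimi K2.5           -8.9 pts
    # Kimi K2 Thinking    +2.9 pts
    # GLM-4.6             +4.6 pts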

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
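The five-run protocol presumably averages a per-run resolved rate; here is a minimal sketch under that assumption (the aggregation is my guess, and SWE-rebench may combine runs differently).

    # Minimal sketch of a five-run scoring protocol, assuming the published
    # score is the mean resolved rate across runs. The aggregation method is
    # an assumption; SWE-rebench may combine runs differently.
    from statistics import mean, stdev

    def resolved_rate(run: list[bool]) -> float:
        """Fraction of tasks a model resolved in a single run."""
        return sum(run) / len(run)

    def score(runs: list[list[bool]]) -> tuple[float, float]:
        """Mean resolved rate across repeated runs, plus its spread."""
        rates = [resolved_rate(r) for r in runs]
        return mean(rates), stdev(rates)

    # Example: five runs over the same four-task set.
    runs = [
        [True, True, False, True],
        [True, False, False, True],
        [True, True, False, True],
        [True, True, True, True],
        [True, False, False, True],
    ]
    avg, spread = score(runs)
    print(f"{avg:.1%} ± {spread:.1%}")  # 70.0% ± 20.9%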

 #   Model                       Score
 1   Claude Code                 52.9%
 2   Junie                       52.1%
 3   Claude Opus 4.6             51.7%
 4   gpt-5.2-2025-12-11-xhigh    51.7%
 5   gpt-5.2-2025-12-11-medium   51.0%
 6   gpt-5.1-codex-max           48.5%
 7   Claude Sonnet 4.5           47.1%
 8   Gemini 3 Pro Preview        46.7%
 9   Gemini 3 Flash Preview      46.7%
10   gpt-5.2-codex               45.0%

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

 #   Model                    Score   tok/s   $/1M
 1   Gemini 3.1 Pro Preview   57.2    120     $4.50
 2   GPT-5.4                  57       73     $5.63
 3   GPT-5.3 Codex            54       70     $4.81
 4   Claude Opus 4.6          53       57     $10.00
 5   Claude Sonnet 4.6        51.7     69     $6.00
 6   GPT-5.2                  51.3     76     $4.81
 7   GLM-5                    49.8     52     $1.55
 8   Claude Opus 4.5          49.7     64     $10.00
 9   GPT-5.2 Codex            49       76     $4.81
10   Gemini 3 Pro Preview     48.4    116     $4.50

Output tokens per second — higher is faster. Minimum intelligence score of 40.

 #   Model                    tok/s
 1   GPT-5 Codex              187
 2   Gemini 3 Flash Preview   166
 3   GPT-5.1 Codex            130
 4   Qwen3.5 122B A10B        129
 5   Gemini 3.1 Pro Preview   120
 6   Gemini 3 Pro Preview     116
 7   MiMo-V2-Flash            116
 8   GPT-5.1                  108
 9   GLM-4.7                  106
10   Qwen3.5 27B               91

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

 #   Model                    $/1M
 1   MiMo-V2-Flash            $0.15
 2   DeepSeek V3.2            $0.315
 3   MiniMax-M2.5             $0.525
 4   GPT-5 mini               $0.688
 5   Qwen3.5 27B              $0.825
 6   GLM-4.7                  $1.00
 7   Kimi K2 Thinking         $1.07
 8   Qwen3.5 122B A10B        $1.10
 9   Gemini 3 Flash Preview   $1.13
10   Kimi K2.5                $1.20
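For reference, the 3:1 blend used in the cost table is a weighted average of input and output prices. A minimal sketch (the prices in the example are made up, not drawn from any provider's price sheet):

    def blended_price(input_per_1m: float, output_per_1m: float) -> float:
        """Blended $/1M tokens, weighting input 3:1 over output."""
        return (3 * input_per_1m + output_per_1m) / 4

    # Example with made-up prices: $0.50/1M input, $2.00/1M output.
    print(blended_price(0.50, 2.00))  # 0.875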