The Inference Report

March 16, 2026

The SWE-rebench rankings show no movement at the top tier: Claude Code, Junie, and Claude Opus 4.6 hold their positions at 52.9%, 52.1%, and 51.7% respectively, while the Artificial Analysis index shows more volatility across its 323-model leaderboard.

The cross-benchmark gaps are where the interesting signal is. Claude Opus 4.5 scores 49.7 on Artificial Analysis (8th) but only 43.8% on SWE-rebench (12th), a 5.9-point gap that suggests either a methodology difference between the two evaluations or genuine degradation under the SWE-rebench protocol. Kimi K2 Thinking climbed 15 positions on Artificial Analysis, where it scores 40.9, and lands 13th on SWE-rebench at 43.8%. GLM-5 fell 7 positions despite respectable scores, going from 49.8 on Artificial Analysis to 42.1 on SWE-rebench. Gemini 3 Pro Preview diverges in the opposite direction: it scores 48.4 on Artificial Analysis (11th) against 46.7 on SWE-rebench (8th), ranking higher on the benchmark where its raw score is lower, which suggests the two leaderboards weight different aspects of coding capability or draw on different problem distributions.

Mid-tier models show the most churn. Kimi K2.5 dropped 8 positions on SWE-rebench (37.9%) despite scoring 46.8 on Artificial Analysis, and GLM-4.6 climbed from position 53 to 22 on Artificial Analysis while sitting at 37.1% (22nd) on SWE-rebench. The absence of a clear correlation between the two sets of rankings across the full dataset suggests the evaluations test distinct problem spaces, or that the SWE-rebench protocol imposes constraints (likely around repository-level code generation and integration) that don't map cleanly onto general coding performance.
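For readers who want to check that intuition, a Spearman rank correlation over the models that appear on both boards is the natural test. The sketch below is illustrative only: it uses the handful of overlapping scores quoted above, not the full leaderboards.

    from scipy.stats import spearmanr

    # Scores quoted above for models that appear on both leaderboards
    # (an illustrative subset, not the full 323-model dataset).
    models = {
        # model: (Artificial Analysis score, SWE-rebench score %)
        "Claude Opus 4.6":      (53.0, 51.7),
        "Claude Opus 4.5":      (49.7, 43.8),
        "GLM-5":                (49.8, 42.1),
        "Gemini 3 Pro Preview": (48.4, 46.7),
        "Kimi K2 Thinking":     (40.9, 43.8),
    }

    aa  = [s for s, _ in models.values()]
    swe = [s for _, s in models.values()]

    # Spearman compares rank orderings, so it doesn't matter that the
    # two benchmarks report scores on different scales.
    rho, p = spearmanr(aa, swe)
    print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")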

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
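Since each model is run five times, the percentages below are best read as means over repeated evaluations. A minimal sketch of that aggregation, with hypothetical per-run resolved rates (the exact statistic SWE-rebench reports is an assumption here):

    import statistics

    # Hypothetical resolved rates from five runs of one model on the
    # same task set; the reported score would be their mean.
    runs = [0.531, 0.525, 0.534, 0.522, 0.533]

    mean  = statistics.mean(runs)
    stdev = statistics.stdev(runs)  # run-to-run (stochastic) variance

    print(f"resolved: {mean:.1%} +/- {stdev:.1%} over {len(runs)} runs")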

#   Model                       Score
1   Claude Code                 52.9%
2   Junie                       52.1%
3   Claude Opus 4.6             51.7%
4   gpt-5.2-2025-12-11-xhigh    51.7%
5   gpt-5.2-2025-12-11-medium   51.0%
6   gpt-5.1-codex-max           48.5%
7   Claude Sonnet 4.5           47.1%
8   Gemini 3 Pro Preview        46.7%
9   Gemini 3 Flash Preview      46.7%
10  gpt-5.2-codex               45.0%

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

#   Model                    Score  tok/s    $/1M
1   Gemini 3.1 Pro Preview    57.2    121   $4.50
2   GPT-5.4                   57       80   $5.63
3   GPT-5.3 Codex             54       70   $4.81
4   Claude Opus 4.6           53       62  $10.00
5   Claude Sonnet 4.6         51.7     69   $6.00
6   GPT-5.2                   51.3     77   $4.81
7   GLM-5                     49.8     68   $1.55
8   Claude Opus 4.5           49.7     72  $10.00
9   GPT-5.2 Codex             49       92   $4.81
10  Grok 4.20 Beta 0309       48.5    264   $3.00
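The composite above spans coding, math, and reasoning benchmarks. The exact component weighting isn't published here, so the sketch below assumes a plain equal-weight mean, purely for illustration:

    # Hypothetical per-category scores for one model. The equal-weight
    # mean is an assumption for illustration; Artificial Analysis's
    # actual weighting and benchmark mix may differ.
    categories = {"coding": 50.0, "math": 60.0, "reasoning": 55.0}

    composite = sum(categories.values()) / len(categories)
    print(f"composite index: {composite:.1f}")  # -> 55.0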

Output tokens per second — higher is faster. Minimum intelligence score of 40.

#   Model                    tok/s
1   Grok 4.20 Beta 0309        264
2   GPT-5 Codex                207
3   Gemini 3 Flash Preview     183
4   Qwen3.5 122B A10B          157
5   GPT-5.1 Codex              135
6   MiMo-V2-Flash              129
7   Gemini 3.1 Pro Preview     121
8   Gemini 3 Pro Preview       118
9   GPT-5.1                    117
10  Kimi K2 Thinking            99
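Mechanically, this board is just the index data filtered to models at or above the intelligence cutoff and sorted by throughput. A sketch of that construction, reusing a few rows from the tables above (the sub-cutoff entry is hypothetical):

    MIN_INTELLIGENCE = 40

    # (model, intelligence score, output tok/s), taken from the index
    # table above; the last entry is hypothetical, to show the cutoff.
    models = [
        ("Grok 4.20 Beta 0309",    48.5, 264),
        ("Gemini 3.1 Pro Preview", 57.2, 121),
        ("GPT-5.2 Codex",          49.0,  92),
        ("Claude Opus 4.6",        53.0,  62),
        ("Hypothetical-Fast-7B",   38.0, 310),  # fast, but filtered out
    ]

    board = sorted(
        (m for m in models if m[1] >= MIN_INTELLIGENCE),
        key=lambda m: m[2],
        reverse=True,  # higher tok/s ranks first
    )

    for rank, (name, _, tps) in enumerate(board, start=1):
        print(f"{rank}. {name}: {tps} tok/s")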

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

#   Model                     $/1M
1   MiMo-V2-Flash            $0.15
2   DeepSeek V3.2            $0.315
3   MiniMax-M2.5             $0.525
4   GPT-5 mini               $0.688
5   Qwen3.5 27B              $0.825
6   GLM-4.7                  $1.00
7   Kimi K2 Thinking         $1.07
8   Qwen3.5 122B A10B        $1.10
9   Gemini 3 Flash Preview   $1.13
10  Kimi K2.5                $1.20
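The blend above weights input and output prices 3:1, i.e. blended = (3 x input + output) / 4 per 1M tokens. A small sketch with hypothetical per-million prices (real provider pricing varies):

    def blended_cost(input_price: float, output_price: float) -> float:
        """Blended $/1M tokens assuming a 3:1 input:output token mix."""
        return (3 * input_price + output_price) / 4

    # Hypothetical prices in $ per 1M tokens.
    print(f"${blended_cost(1.00, 3.00):.3f}/1M blended")  # -> $1.500/1M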