The Inference Report

April 15, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, despite ranking only fourth on Artificial Analysis with a composite score of 53. Gemini 3.1 Pro Preview shows the opposite movement: first on Artificial Analysis at 57.2, but fifth on SWE-rebench at 62.3%. The divergence suggests either that the two benchmarks measure different aspects of code generation or that SWE-rebench's evaluation methodology surfaces capabilities that Artificial Analysis does not.

GLM-5 and Kimi K2.5 also fare better on SWE-rebench than their Artificial Analysis composites would predict, placing third (62.8%, against a 49.8 composite) and thirteenth (58.5%, against 46.8) respectively, while several models near the top of SWE-rebench sit lower on Artificial Analysis. That pattern points either to a shift in which tasks dominate the coding benchmark or to differences in how the two evaluations weight problem difficulty and solution quality.

The SWE-rebench methodology itself is only partially documented in the data provided: the scoring scale differs from Artificial Analysis, the test-set composition is unspecified, and ranking movement alone cannot show whether improvements reflect genuine capability gains or benchmark-specific optimization. What is clear is that SWE-rebench produces a substantially different ordering among frontier models, which matters if teams are using it to guide development priorities. Without documentation of the benchmark's task distribution, evaluation harness, and baseline stability, the practical significance of these shifts remains ambiguous.

Cole Brennan
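
One way to make the divergence concrete is to line up the models that appear in both top-10 lists and compare orderings rather than raw scores, which sit on different scales. A minimal sketch in Python, with scores transcribed from the two tables below; the overlap set is only four models, so treat the correlation as illustrative rather than statistically meaningful:

```python
# Compare how the same models are ordered by SWE-rebench vs. Artificial Analysis.
# Scores are transcribed from the two top-10 tables below; only models appearing
# in both lists are included, so the sample is small and purely illustrative.
swe_rebench = {                      # resolved rate, percent
    "Claude Opus 4.6": 65.3,
    "GLM-5": 62.8,
    "Gemini 3.1 Pro Preview": 62.3,
    "Claude Sonnet 4.6": 60.7,
}
artificial_analysis = {              # composite intelligence index
    "Gemini 3.1 Pro Preview": 57.2,
    "Claude Opus 4.6": 53.0,
    "Claude Sonnet 4.6": 51.7,
    "GLM-5": 49.8,
}

def ranks(scores):
    """Map model -> rank (1 = best) within this overlap set."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {model: i + 1 for i, model in enumerate(ordered)}

r_swe, r_aa = ranks(swe_rebench), ranks(artificial_analysis)
models = sorted(r_swe, key=r_swe.get)

# Spearman rank correlation: 1.0 means identical ordering, 0 means no agreement.
n = len(models)
d_squared = sum((r_swe[m] - r_aa[m]) ** 2 for m in models)
spearman = 1 - 6 * d_squared / (n * (n ** 2 - 1))

for m in models:
    print(f"{m:<26} SWE-rebench #{r_swe[m]}  /  Artificial Analysis #{r_aa[m]}")
print(f"Spearman rho over the overlap set: {spearman:.2f}")
```

With only four shared models the number itself means little; the point is that the two leaderboards can be compared on ordering, which sidesteps the mismatch between a resolved-rate percentage and a composite index.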

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

| # | Model | Score |
|---|-------|-------|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | gpt-5.4-2026-03-05-medium | 62.8% |
| 5 | Gemini 3.1 Pro Preview | 62.3% |
| 6 | DeepSeek-V3.2 | 60.9% |
| 7 | Claude Sonnet 4.6 | 60.7% |
| 8 | Claude Sonnet 4.5 | 60.0% |
| 9 | Qwen3.5-397B-A17B | 59.9% |
| 10 | Step-3.5-Flash | 59.6% |
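
Because each model is run five times, the headline figure above is an average over repeated trials rather than a single pass. A rough sketch of that aggregation, with made-up run results (SWE-rebench's actual per-run numbers are not published in this report):

```python
import statistics

# Hypothetical resolved rates for one model across five independent runs of the
# same task set; SWE-rebench's real per-run numbers are not part of this report.
run_resolved_rates = [64.8, 65.9, 65.1, 66.0, 64.7]  # percent

mean_rate = statistics.mean(run_resolved_rates)
spread = statistics.stdev(run_resolved_rates)             # run-to-run variation
stderr = spread / len(run_resolved_rates) ** 0.5          # standard error of the mean

print(f"reported score: {mean_rate:.1f}% (+/- {stderr:.2f} SE over {len(run_resolved_rates)} runs)")
```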

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

| # | Model | Score | tok/s | $/1M |
|---|-------|-------|-------|------|
| 1 | Gemini 3.1 Pro Preview | 57.2 | 122 | $4.50 |
| 2 | GPT-5.4 | 56.8 | 79 | $5.63 |
| 3 | GPT-5.3 Codex | 53.6 | 65 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 43 | $10.00 |
| 5 | Muse Spark | 52.1 | 0 | $0.00 |
| 6 | Claude Sonnet 4.6 | 51.7 | 52 | $6.00 |
| 7 | GLM-5.1 | 51.4 | 46 | $2.15 |
| 8 | GPT-5.2 | 51.3 | 62 | $4.81 |
| 9 | Qwen3.6 Plus | 50 | 52 | $1.13 |
| 10 | GLM-5 | 49.8 | 66 | $1.55 |
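
How Artificial Analysis weights the individual benchmarks is not spelled out in the data reproduced here; as a rough illustration only, a composite of this kind reduces to a weighted average of per-benchmark scores on a common scale. The sub-scores and equal weights below are assumptions, not Artificial Analysis's actual inputs:

```python
# Hypothetical per-benchmark scores for one model, each already on a 0-100 scale.
sub_scores = {"coding": 55.0, "math": 61.0, "reasoning": 58.5}
weights = {"coding": 1.0, "math": 1.0, "reasoning": 1.0}  # equal weighting assumed

composite = sum(sub_scores[k] * weights[k] for k in sub_scores) / sum(weights.values())
print(f"composite index: {composite:.1f}")  # 58.2 with these made-up inputs
```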

Output tokens per second — higher is faster. Minimum intelligence score of 40.

| # | Model | tok/s |
|---|-------|-------|
| 1 | Gemini 3 Flash Preview | 173 |
| 2 | GPT-5.4 nano | 162 |
| 3 | GPT-5.4 mini | 161 |
| 4 | GPT-5 Codex | 161 |
| 5 | GPT-5.1 Codex | 155 |
| 6 | Grok 4.20 0309 v2 | 151 |
| 7 | Grok 4.20 0309 | 151 |
| 8 | Qwen3.5 122B A10B | 133 |
| 9 | Gemini 3 Pro Preview | 127 |
| 10 | Gemini 3.1 Pro Preview | 122 |
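
The speed ranking above applies an intelligence floor before sorting, so a very fast but weak model never appears. The filter-then-sort logic looks roughly like this; the catalog entries are a hypothetical mix, and only the Gemini 3.1 Pro Preview figures come from the Artificial Analysis table earlier in this issue:

```python
# Hypothetical catalog: (model, intelligence score, output tokens per second).
# Intelligence scores other than Gemini 3.1 Pro Preview's 57.2 are invented here.
catalog = [
    ("Gemini 3 Flash Preview", 46.0, 173),
    ("GPT-5.4 nano", 42.0, 162),
    ("SpeedyDraft-7B", 31.0, 240),          # fast, but excluded by the score floor
    ("Gemini 3.1 Pro Preview", 57.2, 122),
]

MIN_INTELLIGENCE = 40  # the floor stated in the table caption
eligible = [entry for entry in catalog if entry[1] >= MIN_INTELLIGENCE]

for rank, (name, score, tps) in enumerate(
        sorted(eligible, key=lambda entry: entry[2], reverse=True), start=1):
    print(f"{rank}. {name}: {tps} tok/s (intelligence {score})")
```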

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

| # | Model | $/1M |
|---|-------|------|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | GPT-5.4 nano | $0.463 |
| 4 | MiniMax-M2.7 | $0.525 |
| 5 | KAT Coder Pro V2 | $0.525 |
| 6 | MiniMax-M2.5 | $0.525 |
| 7 | GPT-5 mini | $0.688 |
| 8 | Qwen3.5 27B | $0.825 |
| 9 | GLM-4.7 | $1.00 |
| 10 | Kimi K2 Thinking | $1.07 |
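
The blended figure is a 3:1 weighting of input and output token prices. A quick sketch of the arithmetic; the per-direction prices below are hypothetical, since the table only publishes the blended result:

```python
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Blended $/1M tokens at the 3:1 input/output ratio used in the table above."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# Hypothetical per-direction prices; only the blended number appears in the table.
print(f"${blended_price(1.00, 3.00):.2f} per 1M tokens")  # -> $1.50
```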