The Inference Report

April 14, 2026

Claude Opus 4.6 leads SWE-rebench at 65.3%, 12.3 points above the 53.0 it posts on the Artificial Analysis index, while Gemini 3.1 Pro Preview, first on Artificial Analysis at 57.2, lands only fifth on SWE-rebench at 62.3. The two scales are not directly comparable, but the reshuffling suggests the benchmarks measure different aspects of code generation capability or apply distinct evaluation criteria. GLM-5 and Kimi K2.5 both rank sharply higher on SWE-rebench than on Artificial Analysis, GLM-5 moving from 49.8 to 62.8 and Kimi K2.5 from 46.8 to 58.5, pointing either to genuine strength on repository-level tasks or to a methodology that rewards certain architectural choices more than point-estimate evaluations do.

The top five models on SWE-rebench (Claude Opus 4.6, gpt-5.2-2025-12-11-medium, GLM-5, gpt-5.4-2026-03-05-medium, and Gemini 3.1 Pro Preview) all score between 62.3 and 65.3 percent, a compressed band that contrasts with the wider spreads of earlier rankings. Without visibility into SWE-rebench's task composition, full evaluation protocol, or error bars, it remains unclear whether that clustering reflects genuine convergence in code-solving ability or an artifact of test-set size and difficulty distribution. At the bottom of the Artificial Analysis list, position 310 (Llama 2 Chat 13B at 8.4) shows no new entries, a minor data inconsistency that does not affect the coherence of the rankings above.

Both benchmarks are dominated by frontier models, but the divergence in how those models rank underscores that code generation performance is not monolithic: repository-level problem-solving, as measured by SWE-rebench, appears to reward different capabilities than the aggregated tasks tracked by Artificial Analysis.
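For readers who want to reproduce the comparison, here is a minimal Python sketch using the figures quoted in this issue. The Kimi K2.5 numbers come from the write-up above rather than the top-ten tables below, and the deltas are illustrative only, since the two indices use different scales.

# Scores transcribed from this issue's two leaderboards.
# SWE-rebench is a resolved-rate percentage; Artificial Analysis is a
# composite index, so the deltas are illustrative, not like-for-like.
swe_rebench = {
    "Claude Opus 4.6": 65.3,
    "Gemini 3.1 Pro Preview": 62.3,
    "GLM-5": 62.8,
    "Kimi K2.5": 58.5,
}
artificial_analysis = {
    "Claude Opus 4.6": 53.0,
    "Gemini 3.1 Pro Preview": 57.2,
    "GLM-5": 49.8,
    "Kimi K2.5": 46.8,
}

for model in swe_rebench:
    delta = swe_rebench[model] - artificial_analysis[model]
    print(f"{model:25s} SWE-rebench {swe_rebench[model]:5.1f}  "
          f"AA {artificial_analysis[model]:5.1f}  delta {delta:+5.1f}")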

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
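To make the five-run averaging concrete, here is a minimal sketch of aggregating repeated runs into a mean and a run-to-run spread. The run scores are invented for illustration, and the helper is not SWE-rebench's actual harness.

import statistics

def aggregate_runs(resolved_rates):
    # Mean and sample standard deviation across repeated evaluation runs.
    return statistics.mean(resolved_rates), statistics.stdev(resolved_rates)

# Hypothetical example: five runs of one model over the same task set.
runs = [64.9, 65.6, 65.1, 65.4, 65.5]
mean, spread = aggregate_runs(runs)
print(f"resolved rate: {mean:.2f}% (run-to-run std dev {spread:.2f})")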

#    Model                        Score
1    Claude Opus 4.6              65.3%
2    gpt-5.2-2025-12-11-medium    64.4%
3    GLM-5                        62.8%
4    gpt-5.4-2026-03-05-medium    62.8%
5    Gemini 3.1 Pro Preview       62.3%
6    DeepSeek-V3.2                60.9%
7    Claude Sonnet 4.6            60.7%
8    Claude Sonnet 4.5            60.0%
9    Qwen3.5-397B-A17B            59.9%
10   Step-3.5-Flash               59.6%

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

#    Model                     Score   tok/s   $/1M
1    Gemini 3.1 Pro Preview    57.2    132     $4.50
2    GPT-5.4                   56.8    79      $5.63
3    GPT-5.3 Codex             53.6    81      $4.81
4    Claude Opus 4.6           53.0    47      $10.00
5    Muse Spark                52.1    0       $0.00
6    Claude Sonnet 4.6         51.7    59      $6.00
7    GLM-5.1                   51.4    53      $2.15
8    GPT-5.2                   51.3    75      $4.81
9    Qwen3.6 Plus              50.0    48      $1.13
10   GLM-5                     49.8    83      $1.55
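A composite index of this kind is just an aggregation over per-benchmark scores. A minimal sketch follows, assuming an unweighted mean; Artificial Analysis's actual benchmark set and weighting are not given here, and the sub-scores are invented.

def composite_index(sub_scores, weights=None):
    # Weighted mean over per-benchmark scores; unweighted when no weights given.
    if weights is None:
        weights = {name: 1.0 for name in sub_scores}
    total = sum(weights[name] for name in sub_scores)
    return sum(sub_scores[name] * weights[name] for name in sub_scores) / total

# Invented sub-scores, for illustration only.
print(round(composite_index({"coding": 55.0, "math": 61.0, "reasoning": 56.0}), 1))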

Output tokens per second — higher is faster. Minimum intelligence score of 40.

#    Model                     tok/s
1    Gemini 3 Flash Preview    189
2    GPT-5.4 mini              179
3    GPT-5.4 nano              173
4    Grok 4.20 0309            165
5    Grok 4.20 0309 v2         163
6    GPT-5 Codex               162
7    GPT-5.1 Codex             158
8    Gemini 3 Pro Preview      134
9    Gemini 3.1 Pro Preview    132
10   Qwen3.5 122B A10B         128

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

#    Model               $/1M
1    MiMo-V2-Flash       $0.15
2    DeepSeek V3.2       $0.315
3    GPT-5.4 nano        $0.463
4    MiniMax-M2.7        $0.525
5    KAT Coder Pro V2    $0.525
6    MiniMax-M2.5        $0.525
7    GPT-5 mini          $0.688
8    Qwen3.5 27B         $0.825
9    GLM-4.7             $1.00
10   Kimi K2 Thinking    $1.07
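The blended figures in the table above weight input and output prices 3:1. A minimal worked sketch of that blend; the example prices are hypothetical and not taken from the table.

def blended_price(input_per_m, output_per_m, input_ratio=3.0, output_ratio=1.0):
    # Blended $/1M tokens assuming a 3:1 input:output token mix.
    total = input_ratio + output_ratio
    return (input_per_m * input_ratio + output_per_m * output_ratio) / total

# Hypothetical prices: $0.50/1M input and $2.00/1M output
# blend to (3 * 0.50 + 1 * 2.00) / 4 = $0.875 per 1M tokens.
print(blended_price(0.50, 2.00))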