The Inference Report

April 20, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, 12.3 points above its 53.0 composite score on Artificial Analysis, though the two benchmarks measure different problem sets and their scores cannot be directly compared. The top six models on SWE-rebench cluster between 62.3% and 65.3%: gpt-5.2-2025-12-11-medium at 64.4%, GLM-5 and gpt-5.4-2026-03-05-medium tied at 62.8%, GLM-5.1 at 62.7%, and Gemini 3.1 Pro Preview at 62.3%. The tight spread suggests convergence in software-engineering capability at the frontier.

Gemini 3.1 Pro Preview ranks second on Artificial Analysis (57.2) but only sixth on SWE-rebench (62.3%), indicating the benchmarks reward different model properties, with SWE-rebench apparently favoring systems trained or optimized for repository-level code repair. GLM-4.7 sits at rank 36 on Artificial Analysis (42.1) but rank 14 on SWE-rebench (58.7%), Kimi K2.5 moves from rank 21 (46.8) to rank 16 (58.5%), and Kimi K2 Thinking from rank 44 (40.9) to rank 21 (57.4%), suggesting these models contain architectural or training choices that translate effectively to the SWE-rebench evaluation methodology.

The Artificial Analysis leaderboard shows no movement in the top 30 positions relative to the previous snapshot, indicating ranking stability at the frontier on that benchmark. SWE-rebench's methodology of evaluating models on real pull requests and repository contexts appears more sensitive to model-specific optimizations than Artificial Analysis, which blends broader capability assessments; without detailed documentation of task overlap or divergence, claims about which benchmark better predicts real-world code repair performance remain speculative.

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

#    Model                        Score
1    Claude Opus 4.6              65.3%
2    gpt-5.2-2025-12-11-medium    64.4%
3    GLM-5                        62.8%
4    gpt-5.4-2026-03-05-medium    62.8%
5    GLM-5.1                      62.7%
6    Gemini 3.1 Pro Preview       62.3%
7    DeepSeek-V3.2                60.9%
8    Claude Sonnet 4.6            60.7%
9    Claude Sonnet 4.5            60.0%
10   Qwen3.5-397B-A17B            59.9%
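
The five-run protocol means each leaderboard figure is an aggregate rather than a single run. A minimal sketch of how per-run results could be folded into one score, assuming a simple mean over runs; the aggregate_runs helper, task total, and run counts are illustrative, not actual SWE-rebench data or tooling:

from statistics import mean, stdev

def aggregate_runs(resolved_per_run: list[int], total_tasks: int) -> dict:
    # Convert each run's resolved-task count into a rate, then average across
    # runs to damp the stochastic variance the five-run protocol targets.
    rates = [resolved / total_tasks for resolved in resolved_per_run]
    return {
        "score_pct": round(100 * mean(rates), 1),
        "spread_pct": round(100 * stdev(rates), 1) if len(rates) > 1 else 0.0,
        "runs": len(rates),
    }

# Hypothetical example: five runs over a 300-task snapshot.
print(aggregate_runs([196, 199, 195, 197, 193], total_tasks=300))
# {'score_pct': 65.3, 'spread_pct': 0.7, 'runs': 5}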

Artificial Analysis composite index across coding, math, and reasoning benchmarks, alongside output speed (tok/s) and blended price per 1M tokens ($/1M).

#    Model                     Score   tok/s   $/1M
1    Claude Opus 4.7           57.3    53      $10.00
2    Gemini 3.1 Pro Preview    57.2    132     $4.50
3    GPT-5.4                   56.8    85      $5.63
4    GPT-5.3 Codex             53.6    93      $4.81
5    Claude Opus 4.6           53.0    59      $10.00
6    Muse Spark                52.1    0       $0.00
7    Qwen3.6 Max Preview       51.8    0       $0.00
8    Claude Sonnet 4.6         51.7    75      $6.00
9    GLM-5.1                   51.4    45      $2.15
10   GPT-5.2                   51.3    84      $4.81
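
A minimal sketch of how a composite index like this could be assembled, assuming a plain weighted average over category scores; Artificial Analysis does not publish its exact aggregation here, and the composite_index helper and category numbers are hypothetical:

def composite_index(scores: dict[str, float], weights: dict[str, float] | None = None) -> float:
    # Weighted average of per-category scores; equal weights if none are given.
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Hypothetical category scores on a 0-100 scale for a single model.
print(round(composite_index({"coding": 61.0, "math": 55.5, "reasoning": 55.4}), 1))  # 57.3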

Output tokens per second; higher is faster. Only models with an intelligence score of at least 40 are included.

#    Model                     tok/s
1    Qwen3.6 35B A3B           242
2    Grok 4.20 0309 v2         228
3    Grok 4.20 0309            220
4    GPT-5 Codex               211
5    Gemini 3 Flash Preview    197
6    GPT-5.1 Codex             196
7    GPT-5.4 mini              189
8    GPT-5.4 nano              165
9    Qwen3.5 122B A10B         164
10   Gemini 3 Pro Preview      137
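
A small sketch of how an output-speed figure might be derived from a streamed response, assuming throughput is measured as tokens emitted over generation time; the helper name, timestamps, and token count are hypothetical, and production measurements typically average many requests:

def output_tokens_per_second(output_tokens: int, first_token_at: float, last_token_at: float) -> float:
    # Tokens emitted divided by the wall-clock time spent generating them.
    generation_seconds = last_token_at - first_token_at
    if generation_seconds <= 0:
        raise ValueError("last_token_at must be later than first_token_at")
    return output_tokens / generation_seconds

# Hypothetical example: 1,000 output tokens streamed over roughly 4.1 seconds.
print(round(output_tokens_per_second(1000, first_token_at=0.80, last_token_at=4.93)))  # 242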

Blended cost per 1M tokens (3:1 input/output mix); lower is cheaper. Only models with an intelligence score of at least 40 are included.

#    Model                  $/1M
1    MiMo-V2-Flash          $0.15
2    DeepSeek V3.2          $0.315
3    GPT-5.4 nano           $0.463
4    MiniMax-M2.7           $0.525
5    KAT Coder Pro V2       $0.525
6    MiniMax-M2.5           $0.525
7    GPT-5 mini             $0.688
8    Qwen3.5 27B            $0.825
9    Qwen3.6 35B A3B        $0.844
10   GLM-4.7                $1.00
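
The blended figure is a weighted average of input and output prices at the stated 3:1 token mix. A minimal sketch of that arithmetic; the blended_cost_per_million helper and the prices in the example are hypothetical:

def blended_cost_per_million(input_price: float, output_price: float,
                             input_parts: int = 3, output_parts: int = 1) -> float:
    # Mix per-1M-token input and output prices at the given ratio (3:1 by default).
    total_parts = input_parts + output_parts
    return (input_price * input_parts + output_price * output_parts) / total_parts

# Hypothetical example: $0.10 per 1M input tokens and $0.30 per 1M output tokens.
print(round(blended_cost_per_million(0.10, 0.30), 2))  # 0.15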