The Inference Report

April 22, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, a jump from sixth place and a gain of 12.3 percentage points over its previous 53 on Artificial Analysis, though the two benchmarks use different problem sets and methodologies, so direct comparison requires caution. The tier below it remains tightly clustered: gpt-5.2-2025-12-11-medium scores 64.4%, GLM-5 and gpt-5.4-2026-03-05-medium both reach 62.8%, GLM-5.1 sits at 62.7%, and Gemini 3.1 Pro Preview follows at 62.3%.

Two things distinguish these movements: the compression at the top, where five models now occupy a 2.6-point band, and the significant repositioning of Chinese models. GLM-5 advanced from rank 13 to rank 3 (49.8 to 62.8), GLM-4.7 jumped from 38 to 14 (42.1 to 58.7), and Kimi K2.5 rose from 23 to 16 (46.8 to 58.5). Gemini 3.1 Pro Preview's descent from second to sixth, despite a 5.1-point absolute gain to 62.3%, underscores that the benchmark shifted the entire distribution upward rather than exposing a single model's failure.

The SWE-rebench methodology appears to reward architectural or training choices that these frontier models now share more evenly, particularly for code completion and repository-level problem solving. Whether this convergence reflects genuine capability parity or benchmark saturation, with multiple labs approaching the test's difficulty ceiling, remains an open question; answering it requires examining the test construction and error analysis across models, not just the leaderboard positions.

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

#   Model                       Score
1   Claude Opus 4.6             65.3%
2   gpt-5.2-2025-12-11-medium   64.4%
3   GLM-5                       62.8%
4   gpt-5.4-2026-03-05-medium   62.8%
5   GLM-5.1                     62.7%
6   Gemini 3.1 Pro Preview      62.3%
7   DeepSeek-V3.2               60.9%
8   Claude Sonnet 4.6           60.7%
9   Claude Sonnet 4.5           60.0%
10  Qwen3.5-397B-A17B           59.9%
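
The five-run protocol reduces to a straightforward average of per-run resolve rates. Below is a minimal sketch of that aggregation, assuming a per-model list of (resolved, total) run results; the model names, numbers, and data layout are illustrative, not SWE-rebench's actual format.

```python
from statistics import mean, stdev

# Illustrative per-run results: each entry is (tasks_resolved, tasks_total).
# The model names and numbers are hypothetical, not actual SWE-rebench data.
runs = {
    "model-a": [(67, 100), (64, 100), (66, 100), (65, 100), (65, 100)],
    "model-b": [(61, 100), (63, 100), (62, 100), (64, 100), (62, 100)],
}

def resolve_rates(per_run):
    """Convert (resolved, total) pairs into per-run resolve rates."""
    return [resolved / total for resolved, total in per_run]

# Report each model's mean over the five runs, plus the run-to-run spread
# that repeated evaluation is meant to smooth out.
for model, per_run in runs.items():
    rates = resolve_rates(per_run)
    print(f"{model}: {mean(rates):.1%} +/- {stdev(rates):.1%} over {len(rates)} runs")

# The leaderboard position is then determined by the mean resolve rate.
ranking = sorted(runs, key=lambda m: mean(resolve_rates(runs[m])), reverse=True)
print("ranking:", ranking)
```

Averaging over repeated runs narrows the uncertainty on each model's score, which matters when the top of the leaderboard is separated by fractions of a point.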

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

#   Model                    Score  tok/s  $/1M
1   Claude Opus 4.7          57.3   62     $10.00
2   Gemini 3.1 Pro Preview   57.2   127    $4.50
3   GPT-5.4                  56.8   82     $5.63
4   Kimi K2.6                53.9   135    $1.71
5   GPT-5.3 Codex            53.6   80     $4.81
6   Claude Opus 4.6          53.0   53     $10.00
7   Muse Spark               52.1   0      $0.00
8   Qwen3.6 Max Preview      51.8   47     $2.92
9   Claude Sonnet 4.6        51.7   73     $6.00
10  GLM-5.1                  51.4   44     $2.15
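
Artificial Analysis does not state its weighting here, so the sketch below simply treats the composite as an unweighted mean of per-category scores; the category names come from the caption above, but the numbers and equal weights are assumptions.

```python
# Hypothetical per-category scores (0-100). The real index's weighting is not
# given in the report, so equal weights are assumed purely for illustration.
categories = {"coding": 58.0, "math": 61.5, "reasoning": 52.4}

weights = {name: 1 / len(categories) for name in categories}  # assumed equal weights
composite = sum(weights[name] * score for name, score in categories.items())
print(f"composite index: {composite:.1f}")  # 57.3 with these made-up inputs
```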

Output tokens per second — higher is faster. Minimum intelligence score of 40.

#   Model                    tok/s
1   Qwen3.6 35B A3B          242
2   GPT-5 Codex              214
3   Gemini 3 Flash Preview   195
4   Grok 4.20 0309           177
5   GPT-5.1 Codex            177
6   Grok 4.20 0309 v2        174
7   GPT-5.4 mini             174
8   Qwen3.5 122B A10B        159
9   GPT-5.4 nano             157
10  Kimi K2.6                135
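
Both the throughput and cost tables apply the same eligibility rule before ranking: keep only models with an intelligence score of at least 40, then sort by the metric of interest. A small sketch of that filter-then-sort step, using hypothetical records rather than the table's data:

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    name: str              # model identifier
    intelligence: float    # composite intelligence score
    tokens_per_sec: float  # output throughput

# Hypothetical entries; none of these numbers come from the tables above.
entries = [
    ModelEntry("fast-but-weak", 32.0, 310.0),
    ModelEntry("balanced", 48.5, 180.0),
    ModelEntry("frontier", 57.0, 62.0),
]

MIN_INTELLIGENCE = 40.0  # cutoff stated in the table captions

eligible = [e for e in entries if e.intelligence >= MIN_INTELLIGENCE]
ranked = sorted(eligible, key=lambda e: e.tokens_per_sec, reverse=True)

for rank, entry in enumerate(ranked, start=1):
    print(f"{rank}. {entry.name}: {entry.tokens_per_sec:.0f} tok/s")
```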

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

#   Model              $/1M
1   MiMo-V2-Flash      $0.15
2   DeepSeek V3.2      $0.315
3   GPT-5.4 nano       $0.463
4   MiniMax-M2.7       $0.525
5   KAT Coder Pro V2   $0.525
6   MiniMax-M2.5       $0.525
7   GPT-5 mini         $0.688
8   Qwen3.5 27B        $0.825
9   Qwen3.6 35B A3B    $0.844
10  GLM-4.7            $1.00
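
The blended price in this table follows the 3:1 input/output weighting stated in the caption: three parts input price to one part output price, per million tokens. A worked sketch of that formula follows; the prices in the example are invented, and only the 3:1 blend comes from the caption.

```python
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Blended $/1M tokens at a 3:1 input:output token ratio."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# Hypothetical prices, not any provider's published rates:
# $0.40/1M input and $1.60/1M output blend to $0.70/1M.
print(f"${blended_price(0.40, 1.60):.3f} per 1M tokens")
```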