The Inference Report

May 5, 2026

Claude Opus 4.6 holds the SWE-rebench top position at 65.3%, against a composite score of 53 on Artificial Analysis, where it sits at position 9. The two scales are not directly comparable, but the rank gap tells the same story as the 12.3-point spread: the model is measurably stronger at solving real repository issues than its composite score suggests. The elite tier is also clustering more tightly: gpt-5.2-2025-12-11-medium (64.4%), GLM-5 (62.8%), and Junie (62.8%) occupy positions 2 through 4, with only 1.6 percentage points separating them, where previous rankings showed more dispersion.

Gemini 3.1 Pro Preview ranks 3rd on Artificial Analysis but 7th on SWE-rebench (a 57.2 composite score versus 62.3% of issues resolved), a counterintuitive split that suggests the two benchmarks measure different failure modes: Artificial Analysis may weight prompt-following and abstract reasoning more heavily, while SWE-rebench's repository-level problems require integrating code across files, managing context, and validating against actual test suites. The larger reorganization appears in the mid-tier: GLM-5 jumped from position 17 to position 3, Kimi K2.5 climbed from 29 to 16, and GLM-4.7 advanced from 44 to 14, gains of 14, 13, and 30 positions respectively, indicating these models have specific strengths in multi-step code repair that Artificial Analysis does not capture as prominently.

The divergence between the two benchmarks is methodological: SWE-rebench tests models on unmodified real pull requests and requires executable validation, whereas Artificial Analysis uses a broader evaluation framework that appears to include instruction-following and synthetic tasks. Within SWE-rebench alone, the top 10 shows no movement, suggesting the highest-performing models have plateaued relative to one another, while the reshuffling below position 15 looks like genuine capability differences rather than run-to-run noise.
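
The mid-tier movement is plain arithmetic on the two boards' positions. Here is a minimal sketch in Python that recomputes the deltas quoted above; the position pairs are the ones cited in this summary, since the tables below only list each board's top 10.

```python
# Rank movement between the Artificial Analysis and SWE-rebench leaderboards.
# Position pairs are the ones quoted in the summary above; a positive delta
# means the model ranks higher (closer to 1) on SWE-rebench.
positions = {
    # model: (Artificial Analysis rank, SWE-rebench rank)
    "GLM-5": (17, 3),
    "Kimi K2.5": (29, 16),
    "GLM-4.7": (44, 14),
    "Gemini 3.1 Pro Preview": (3, 7),
}

for model, (aa, swe) in positions.items():
    delta = aa - swe
    direction = "up" if delta > 0 else "down"
    print(f"{model}: {aa} -> {swe} ({direction} {abs(delta)} positions)")
```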

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities on real-world software engineering tasks on an even footing. Unlike many evaluations, it uses the same standardized scaffolding for every model, continuously refreshes its dataset to prevent contamination, and runs each model five times to account for stochastic variance (a sketch of that averaging follows the table).

| # | Model | Score |
|---|-------|-------|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | Junie | 62.8% |
| 5 | gpt-5.4-2026-03-05-medium | 62.8% |
| 6 | GLM-5.1 | 62.7% |
| 7 | Gemini 3.1 Pro Preview | 62.3% |
| 8 | DeepSeek-V3.2 | 60.9% |
| 9 | Claude Sonnet 4.6 | 60.7% |
| 10 | Claude Sonnet 4.5 | 60.0% |
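
Since each published score aggregates five runs, the reported figure is just a mean over per-run resolved rates. A minimal sketch of that aggregation, using hypothetical per-run numbers; SWE-rebench's actual per-run data is not reproduced here.

```python
from statistics import mean, stdev

# Hypothetical resolved rates (fraction of issues solved) for one model
# across five independent runs; real per-run figures are not shown above.
runs = [0.660, 0.640, 0.650, 0.670, 0.645]

print(f"reported score: {mean(runs):.1%}")      # 65.3% under these assumptions
print(f"run-to-run spread: {stdev(runs):.2%}")  # rough stochastic variance
```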

Artificial Analysis composite index across coding, math, and reasoning benchmarks, alongside each model's output speed (tok/s) and blended price ($/1M tokens).

| # | Model | Score | tok/s | $/1M |
|---|-------|-------|-------|------|
| 1 | GPT-5.5 | 60.2 | 74 | $11.25 |
| 2 | Claude Opus 4.7 | 57.3 | 58 | $10.94 |
| 3 | Gemini 3.1 Pro Preview | 57.2 | 130 | $4.50 |
| 4 | GPT-5.4 | 56.8 | 86 | $5.63 |
| 5 | Kimi K2.6 | 53.9 | 30 | $1.71 |
| 6 | MiMo-V2.5-Pro | 53.8 | 64 | $1.50 |
| 7 | GPT-5.3 Codex | 53.6 | 87 | $4.81 |
| 8 | Grok 4.3 | 53.2 | 108 | $1.56 |
| 9 | Claude Opus 4.6 | 53.0 | 48 | $10.94 |
| 10 | Muse Spark | 52.1 | 0 | $0.00 |
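
The composite index folds category benchmarks into one number. A minimal sketch, assuming an unweighted mean over three category sub-scores; Artificial Analysis's actual category list and weighting may differ, and the sub-scores below are hypothetical.

```python
from statistics import mean

# Hypothetical 0-100 sub-scores; the real categories and weights
# used by Artificial Analysis may differ.
subscores = {"coding": 58.0, "math": 63.1, "reasoning": 59.5}

composite = mean(subscores.values())
print(f"composite index: {composite:.1f}")  # 60.2 under these assumptions
```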

Output tokens per second; higher is faster. Models scoring below 40 on the intelligence index are excluded.

| # | Model | tok/s |
|---|-------|-------|
| 1 | Gemini 3 Flash Preview | 199 |
| 2 | Qwen3.6 35B A3B | 194 |
| 3 | GPT-5 Codex | 187 |
| 4 | GPT-5.1 Codex | 182 |
| 5 | GPT-5.4 mini | 177 |
| 6 | GPT-5.4 nano | 164 |
| 7 | Qwen3.5 122B A10B | 157 |
| 8 | GPT-5.1 | 153 |
| 9 | MiMo-V2-Flash | 150 |
| 10 | MiMo-V2-Omni-0327 | 139 |
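
Throughput is output tokens divided by generation time. A minimal sketch of the measurement, assuming you already have per-token arrival timestamps from a streaming response; no specific provider API is implied.

```python
def tokens_per_second(timestamps: list[float]) -> float:
    """Throughput from per-token arrival times in seconds. The rate is
    measured from the first token, so time-to-first-token latency does
    not distort it."""
    if len(timestamps) < 2:
        raise ValueError("need at least two tokens to measure a rate")
    elapsed = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / elapsed

# Hypothetical stream: one token every 5 ms for 200 tokens.
stamps = [i * 0.005 for i in range(200)]
print(f"{tokens_per_second(stamps):.0f} tok/s")  # 200 under these assumptions
```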

Blended cost per 1M tokens, weighted 3:1 input to output; lower is cheaper. Models scoring below 40 on the intelligence index are excluded. (A sketch of the blend calculation follows the table.)

| # | Model | $/1M |
|---|-------|------|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V4 Flash | $0.175 |
| 3 | DeepSeek V3.2 | $0.337 |
| 4 | GPT-5.4 nano | $0.463 |
| 5 | MiniMax-M2.7 | $0.525 |
| 6 | KAT Coder Pro V2 | $0.525 |
| 7 | MiniMax-M2.5 | $0.525 |
| 8 | Qwen3.6 35B A3B | $0.557 |
| 9 | GPT-5 mini | $0.688 |
| 10 | MiMo-V2.5 | $0.80 |
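
The 3:1 blend weights input tokens three times as heavily as output tokens. A minimal sketch of the calculation, with hypothetical per-direction prices chosen to reproduce MiMo-V2-Flash's $0.15 blended figure; the actual input and output list prices are not shown above.

```python
def blended_price(input_per_1m: float, output_per_1m: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Blended $/1M tokens at the given input:output weighting (3:1 default)."""
    total = input_weight + output_weight
    return (input_weight * input_per_1m + output_weight * output_per_1m) / total

# Hypothetical list prices: $0.10/1M input, $0.30/1M output.
print(f"${blended_price(0.10, 0.30):.2f}/1M blended")  # $0.15 under these assumptions
```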