The Inference Report

May 8, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3 percent, unchanged from the previous cycle, while the tier below it has compressed significantly: gpt-5.2-2025-12-11-medium scores 64.4 percent, and three models (GLM-5, Junie, and gpt-5.4-2026-03-05-medium) sit in a dead heat at 62.8 percent. The most striking divergence comes from GLM-5, which ranks 17th on Artificial Analysis with a composite score of 49.8 but 3rd on SWE-rebench at 62.8 percent, a 13-point gap that suggests either a genuine edge on real-world software engineering tasks or a methodological difference between the two benchmarks worth examining. GLM-5.1 shows a similar pattern, rank 14 at 51.4 versus rank 6 at 62.7 percent, and Kimi K2 Thinking goes from rank 54 at 40.9 to rank 21 at 57.4 percent. Gemini 3.1 Pro Preview moves the other way, from rank 3 on Artificial Analysis (57.2) to rank 7 on SWE-rebench (62.3 percent), a slip in rank that places it below several newer contenders despite strong absolute performance.

The Artificial Analysis leaderboard itself shows GPT-5.5 leading at 60.2, with Claude Opus 4.7 at 57.3 and Gemini 3.1 Pro Preview at 57.2, an ordering entirely different from SWE-rebench's. The divergence raises a methodological question: SWE-rebench appears to reward certain architectural or training choices that Artificial Analysis does not, and without clarity on what each benchmark isolates, ranking movements alone cannot confirm whether progress is real or an artifact of evaluation design.

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities fairly on real-world software engineering tasks. Unlike most evaluations, it uses standardized scaffolding for all models, continuously refreshes its dataset to prevent contamination, and runs each model five times to average out stochastic variance.

#  | Model                     | Score
1  | Claude Opus 4.6           | 65.3%
2  | gpt-5.2-2025-12-11-medium | 64.4%
3  | GLM-5                     | 62.8%
4  | Junie                     | 62.8%
5  | gpt-5.4-2026-03-05-medium | 62.8%
6  | GLM-5.1                   | 62.7%
7  | Gemini 3.1 Pro Preview    | 62.3%
8  | DeepSeek-V3.2             | 60.9%
9  | Claude Sonnet 4.6         | 60.7%
10 | Claude Sonnet 4.5         | 60.0%
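
The methodology note above says each model is run five times; exactly how SWE-rebench folds those runs into the headline number isn't spelled out here, so the snippet below is only a sketch that averages per-run resolved rates and reports the spread. The run values are illustrative, chosen to land on the table's top score.

```python
from statistics import mean, stdev

# Hypothetical per-run resolved rates (fraction of tasks solved) for one model
# across the five independent runs described in the methodology note.
runs = [0.641, 0.655, 0.648, 0.662, 0.659]

score = mean(runs)    # headline score, assuming a simple mean over runs
spread = stdev(runs)  # run-to-run variability from stochastic decoding

print(f"score: {score:.1%} (±{spread:.1%} across {len(runs)} runs)")
# -> score: 65.3% (±0.9% across 5 runs)
```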

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

#  | Model                  | Score | tok/s | $/1M
1  | GPT-5.5                | 60.2  | 75    | $11.25
2  | Claude Opus 4.7        | 57.3  | 52    | $10.94
3  | Gemini 3.1 Pro Preview | 57.2  | 125   | $4.50
4  | GPT-5.4                | 56.8  | 78    | $5.63
5  | Kimi K2.6              | 53.9  | 38    | $1.71
6  | MiMo-V2.5-Pro          | 53.8  | 62    | $1.50
7  | GPT-5.3 Codex          | 53.6  | 79    | $4.81
8  | Grok 4.3               | 53.2  | 80    | $1.56
9  | Claude Opus 4.6        | 53    | 50    | $10.94
10 | Muse Spark             | 52.1  | 0     | $0.00
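
To make the cross-benchmark gaps in the lede concrete, here is a minimal sketch that joins the two leaderboards on model name and prints per-model score deltas. The numbers are copied from the figures quoted above; the variable names and the simple subtraction are illustrative, not how either benchmark publishes its data.

```python
# SWE-rebench resolved rates vs. Artificial Analysis composite scores,
# both on a 0-100 scale, for the models named in the lede.
swe_rebench = {
    "GLM-5": 62.8,
    "GLM-5.1": 62.7,
    "Kimi K2 Thinking": 57.4,
    "Gemini 3.1 Pro Preview": 62.3,
}
artificial_analysis = {
    "GLM-5": 49.8,
    "GLM-5.1": 51.4,
    "Kimi K2 Thinking": 40.9,
    "Gemini 3.1 Pro Preview": 57.2,
}

# Only models present on both leaderboards are compared.
for model in sorted(swe_rebench.keys() & artificial_analysis.keys()):
    delta = swe_rebench[model] - artificial_analysis[model]
    print(f"{model}: {delta:+.1f} points (SWE-rebench minus Artificial Analysis)")
# GLM-5 comes out at +13.0, the gap discussed above; Gemini 3.1 Pro Preview's
# +5.1 is the smallest of the four, which is why it loses ground in relative terms.
```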

Output tokens per second — higher is faster. Minimum intelligence score of 40.

#  | Model                  | tok/s
1  | Gemini 3 Flash Preview | 198
2  | Qwen3.6 35B A3B        | 188
3  | GPT-5.1 Codex          | 177
4  | GPT-5.4 mini           | 170
5  | GPT-5 Codex            | 165
6  | GPT-5.4 nano           | 156
7  | Qwen3.5 122B A10B      | 152
8  | GPT-5.1                | 140
9  | MiMo-V2-Flash          | 139
10 | Gemini 3 Pro Preview   | 128
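
Throughput maps directly onto wait time for a fixed-length response. A rough conversion, assuming generation speed alone dominates (it ignores time-to-first-token and input processing):

```python
def seconds_for(output_tokens: int, tokens_per_second: float) -> float:
    """Rough generation time for a response, ignoring time-to-first-token."""
    return output_tokens / tokens_per_second

# A 1,000-token answer at the top and bottom of the speed table above:
print(f"{seconds_for(1000, 198):.1f} s")  # 5.1 s at Gemini 3 Flash Preview's rate
print(f"{seconds_for(1000, 128):.1f} s")  # 7.8 s at Gemini 3 Pro Preview's rate
```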

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

#  | Model             | $/1M
1  | MiMo-V2-Flash     | $0.15
2  | DeepSeek V4 Flash | $0.175
3  | DeepSeek V3.2     | $0.337
4  | GPT-5.4 nano      | $0.463
5  | MiniMax-M2.7      | $0.525
6  | KAT Coder Pro V2  | $0.525
7  | MiniMax-M2.5      | $0.525
8  | Qwen3.6 35B A3B   | $0.557
9  | GPT-5 mini        | $0.688
10 | MiMo-V2.5         | $0.72
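
The blended figure weights input and output prices 3:1. A minimal sketch of that arithmetic, using made-up per-token rates since the leaderboard lists only the blended result, not the underlying input and output prices:

```python
def blended_cost(input_per_1m: float, output_per_1m: float) -> float:
    """Blended $/1M tokens at the 3:1 input/output ratio used above."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# Hypothetical rates: $0.10/1M input and $0.30/1M output blend to $0.15/1M,
# which is how a model could land where MiMo-V2-Flash does at the top of the table.
print(f"${blended_cost(0.10, 0.30):.2f}")  # $0.15
```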