The Inference Report

April 29, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, unchanged from the previous cycle, while the tier immediately below has compressed considerably. GPT-5.2-2025-12-11-medium sits at 64.4%, and GLM-5, GPT-5.4-2026-03-05-medium, GLM-5.1, and Gemini 3.1 Pro Preview cluster between 62.3 and 62.8 percent, suggesting a plateau where further gains will require either methodological refinement or fundamentally different approaches to coding tasks.

The notable movers are Claude Sonnet 4.6, which jumped from position 11 to 8 with a 9.0-point gain from 51.7 to 60.7 percent, and GLM-4.7, which climbed from position 43 to 14 by improving 16.6 points from 42.1 to 58.7 percent. Gemini 3.1 Pro Preview went the other way, sliding from position 3 to 6 after a 4.9-point drop to 62.3 percent on SWE-rebench despite maintaining strong performance on Artificial Analysis.

The divergence between the two benchmarks is instructive. Claude Opus 4.6 scores 65.3 on SWE-rebench but only 53 on Artificial Analysis, a 12.3-point gap, while Gemini 3.1 Pro Preview shows a narrower 5.1-point separation at 62.3 versus 57.2. Either the benchmarks weight different problem classes, or SWE-rebench's evaluation criteria reward the specific capabilities Claude's latest iteration emphasizes. Without visibility into whether SWE-rebench's test set, scoring rubric, or evaluation harness changed this cycle, it is unclear whether these movements reflect genuine capability shifts or benchmark sensitivity to model-specific strengths.
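
Those swings and gaps are plain arithmetic over the published scores. A minimal sketch recomputing them, using only the figures quoted in this issue:

```python
# Cycle-over-cycle movement on SWE-rebench, from the standings quoted above.
previous = {"Claude Sonnet 4.6": 51.7, "GLM-4.7": 42.1}
current  = {"Claude Sonnet 4.6": 60.7, "GLM-4.7": 58.7}

for model, prev in previous.items():
    print(f"{model}: {current[model] - prev:+.1f} pts")  # +9.0, +16.6

# Cross-benchmark gap: SWE-rebench score minus Artificial Analysis score.
swe = {"Claude Opus 4.6": 65.3, "Gemini 3.1 Pro Preview": 62.3}
aa  = {"Claude Opus 4.6": 53.0, "Gemini 3.1 Pro Preview": 57.2}

for model in swe:
    print(f"{model}: {swe[model] - aa[model]:+.1f} pt gap")  # +12.3, +5.1
```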

Cole Brennan

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

 #  Model                       Score
 1  Claude Opus 4.6             65.3%
 2  gpt-5.2-2025-12-11-medium   64.4%
 3  GLM-5                       62.8%
 4  gpt-5.4-2026-03-05-medium   62.8%
 5  GLM-5.1                     62.7%
 6  Gemini 3.1 Pro Preview      62.3%
 7  DeepSeek-V3.2               60.9%
 8  Claude Sonnet 4.6           60.7%
 9  Claude Sonnet 4.5           60.0%
10  Qwen3.5-397B-A17B           59.9%
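
Each published score therefore folds five runs into one number. A minimal sketch of that aggregation, assuming the leaderboard reports the mean of the runs; the per-run rates below are invented for illustration, since SWE-rebench publishes only the final score:

```python
from statistics import mean, stdev

# Hypothetical resolved rates for one model across five independent runs.
# SWE-rebench does not publish per-run numbers; these are illustrative.
runs = [0.648, 0.655, 0.651, 0.657, 0.654]

print(f"reported score: {mean(runs):.1%} (run-to-run spread: ±{stdev(runs):.1%})")
# reported score: 65.3% (run-to-run spread: ±0.4%)
```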

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

 #  Model                   Score  tok/s  $/1M
 1  GPT-5.5                  60.2     68  $11.25
 2  Claude Opus 4.7          57.3     56  $10.00
 3  Gemini 3.1 Pro Preview   57.2    133  $4.50
 4  GPT-5.4                  56.8     90  $5.63
 5  Kimi K2.6                53.9      0  $1.71
 6  MiMo-V2.5-Pro            53.8     65  $1.50
 7  GPT-5.3 Codex            53.6     96  $4.81
 8  Claude Opus 4.6          53.0     57  $10.00
 9  Muse Spark               52.1      0  $0.00
10  Qwen3.6 Max Preview      51.8     33  $2.92

Output tokens per second — higher is faster. Minimum intelligence score of 40.

 #  Model                   tok/s
 1  Gemini 3 Flash Preview    205
 2  Qwen3.6 35B A3B           199
 3  GPT-5 Codex               192
 4  GPT-5.1 Codex             183
 5  GPT-5.4 mini              174
 6  GPT-5.4 nano              160
 7  Qwen3.5 122B A10B         145
 8  Gemini 3 Pro Preview      143
 9  GPT-5.1                   142
10  Gemini 3.1 Pro Preview    133

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

 #  Model              $/1M
 1  MiMo-V2-Flash      $0.15
 2  DeepSeek V4 Flash  $0.175
 3  DeepSeek V3.2      $0.315
 4  GPT-5.4 nano       $0.463
 5  MiniMax-M2.7       $0.525
 6  KAT Coder Pro V2   $0.525
 7  MiniMax-M2.5       $0.525
 8  Qwen3.6 35B A3B    $0.557
 9  GPT-5 mini         $0.688
10  Qwen3.5 27B        $0.825
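
The 3:1 blend behind this table weights input tokens three to one against output tokens: blended price = (3 × input rate + output rate) / 4, per million tokens. A minimal sketch with hypothetical per-direction rates, since the table publishes only the blended figure:

```python
def blended_cost(input_per_m: float, output_per_m: float) -> float:
    # 3:1 input/output blend: three quarters of traffic at the input
    # rate, one quarter at the output rate.
    return (3 * input_per_m + output_per_m) / 4

# Hypothetical rates of $0.10/1M input and $0.30/1M output.
print(f"${blended_cost(0.10, 0.30):.3f} per 1M blended tokens")  # $0.150
```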