The Inference Report

April 1, 2026

Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, unchanged from the previous ranking cycle, while the middle tier has consolidated with modest gains across multiple models. Comparing the two benchmarks reveals a persistent divergence: SWE-rebench places Claude Opus 4.6 first at 65.3%, 12.3 points above its Artificial Analysis composite of 53.0, where it ranks only fourth behind GPT-5.4's 57.2. The gap likely reflects differences in evaluation methodology and problem distribution, and because the two scores sit on different scales, the point differences are descriptive rather than strictly comparable.

On SWE-rebench, Kimi K2.5 held position 13 at 58.5%, unchanged from the previous cycle. Kimi K2 Thinking, by contrast, sits at rank 36 with 40.9 on Artificial Analysis but rank 17 with 57.4% on SWE-rebench, a 16.5-point spread suggesting the thinking variant's strengths line up far better with SWE-rebench's task mix than with the composite index. GLM-5 shows a similar split, placing rank 7 at 49.8 on Artificial Analysis but rank 3 at 62.8% on SWE-rebench, a 13-point spread. Gemini 3.1 Pro Preview, ranked second on Artificial Analysis at 57.2 (matching GPT-5.4's leading score), comes in at rank 5 on SWE-rebench with 62.3%.

SWE-rebench appears to reward reasoning-capable models more heavily than Artificial Analysis does: thinking-oriented variants such as Kimi K2 Thinking gain the most ground in its ordering. Even without access to the specific test cases or protocol details of either benchmark, the directional pattern is clear: models with explicit reasoning steps rank relatively higher on SWE-rebench, while Artificial Analysis, as a composite across coding, math, and reasoning, spreads its weight over a broader task mix.

Cole Brennan
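
For readers who want to check the point gaps quoted above, here is a minimal Python sketch. The scores are copied from the rankings discussed in this issue; the dictionary layout and variable names are illustrative, not any benchmark's API, and the two scales differ, so the gaps are descriptive only.

```python
# Scores as quoted above: SWE-rebench resolved rate (%) vs. the
# Artificial Analysis composite index. The scales differ, so the
# gaps below are descriptive, not strictly comparable.
swe_rebench = {
    "Claude Opus 4.6": 65.3,
    "GLM-5": 62.8,
    "Kimi K2 Thinking": 57.4,
}
artificial_analysis = {
    "Claude Opus 4.6": 53.0,
    "GLM-5": 49.8,
    "Kimi K2 Thinking": 40.9,
}

for model, swe_score in swe_rebench.items():
    aa_score = artificial_analysis[model]
    print(f"{model}: {swe_score} vs {aa_score} -> {swe_score - aa_score:+.1f} pts")

# Claude Opus 4.6: 65.3 vs 53.0 -> +12.3 pts
# GLM-5: 62.8 vs 49.8 -> +13.0 pts
# Kimi K2 Thinking: 57.4 vs 40.9 -> +16.5 pts
```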

Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.

 #   Model                        Score
 1   Claude Opus 4.6              65.3%
 2   gpt-5.2-2025-12-11-medium    64.4%
 3   GLM-5                        62.8%
 4   gpt-5.4-2026-03-05-medium    62.8%
 5   Gemini 3.1 Pro Preview       62.3%
 6   DeepSeek-V3.2                60.9%
 7   Claude Sonnet 4.6            60.7%
 8   Claude Sonnet 4.5            60.0%
 9   Qwen3.5-397B-A17B            59.9%
10   Step-3.5-Flash               59.6%
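
As a concrete illustration of the five-run protocol described above, here is a minimal sketch of how per-run results might be aggregated. The run scores are made up for illustration and are not actual SWE-rebench output.

```python
# Hypothetical resolved rates (%) for one model across the five
# independent runs SWE-rebench performs; numbers are illustrative only.
runs = [64.8, 65.1, 65.9, 65.4, 65.3]

mean_score = sum(runs) / len(runs)   # the reported leaderboard score
spread = max(runs) - min(runs)       # rough view of stochastic variance

print(f"reported score:    {mean_score:.1f}%")   # 65.3%
print(f"run-to-run spread: {spread:.1f} pts")    # 1.1 pts
```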

Artificial Analysis composite index across coding, math, and reasoning benchmarks.

 #   Model                    Score   tok/s   $/1M
 1   GPT-5.4                  57.2    79      $5.63
 2   Gemini 3.1 Pro Preview   57.2    117     $4.50
 3   GPT-5.3 Codex            54      86      $4.81
 4   Claude Opus 4.6          53      58      $10.00
 5   Claude Sonnet 4.6        51.7    61      $6.00
 6   GPT-5.2                  51.3    71      $4.81
 7   GLM-5                    49.8    66      $1.55
 8   Claude Opus 4.5          49.7    59      $10.00
 9   MiniMax-M2.7             49.6    45      $0.525
10   MiMo-V2-Pro              49.2    0       $1.50

Output tokens per second — higher is faster. Minimum intelligence score of 40.

 #   Model                    tok/s
 1   Grok 4.20 Beta 0309      237
 2   GPT-5 Codex              207
 3   Gemini 3 Flash Preview   192
 4   GPT-5.4 mini             186
 5   GPT-5.4 nano             185
 6   Qwen3.5 122B A10B        145
 7   GPT-5.1 Codex            138
 8   MiMo-V2-Flash            123
 9   Gemini 3 Pro Preview     119
10   GPT-5.2 Codex            118

Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.

 #   Model                $/1M
 1   MiMo-V2-Flash        $0.15
 2   DeepSeek V3.2        $0.315
 3   GPT-5.4 nano         $0.463
 4   MiniMax-M2.7         $0.525
 5   KAT Coder Pro V2     $0.525
 6   MiniMax-M2.5         $0.525
 7   GPT-5 mini           $0.688
 8   Qwen3.5 27B          $0.825
 9   GLM-4.7              $1.00
10   Kimi K2 Thinking     $1.07
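
The blended figures above assume a 3:1 input-to-output token mix. Assuming the blend is a simple weighted average of per-direction list prices (the per-direction prices in this sketch are hypothetical, not taken from the table), the arithmetic works out as follows.

```python
def blended_cost(input_price: float, output_price: float) -> float:
    """Blended $ per 1M tokens, assuming a 3:1 input:output token mix."""
    return (3 * input_price + 1 * output_price) / 4

# Hypothetical per-direction prices: $0.40/1M input, $1.60/1M output.
# Three parts input to one part output blends to $0.70 per 1M tokens.
print(f"${blended_cost(0.40, 1.60):.2f}")  # $0.70
```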