Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, unchanged from the previous ranking cycle, while the middle tier has consolidated with modest gains across multiple models. Comparing the two benchmarks reveals a persistent divergence: SWE-rebench places Claude Opus 4.6 first, but Artificial Analysis ranks it fourth at 53.0, behind GPT-5.4 at 57.2. The gap likely reflects differences in evaluation methodology and problem distribution rather than in the model itself.

The divergence is sharpest for reasoning-oriented models. Kimi K2.5 held position 13 on SWE-rebench at 58.5%, unchanged, but Kimi K2 Thinking sits at rank 36 (40.9) on Artificial Analysis against rank 17 (57.4%) on SWE-rebench, a 19-place swing. GLM-5 shows the same split: rank 7 (49.8) on Artificial Analysis versus rank 3 (62.8%) on SWE-rebench. Note that the two scores are not on the same scale (a composite index versus a resolved-rate percentage), so raw point differences between them should not be read as absolute gains; the rank movements are the meaningful signal. Gemini 3.1 Pro Preview runs the other way, placing second on Artificial Analysis (57.2) but fifth on SWE-rebench (62.3%).

SWE-rebench appears to reward reasoning-capable models more heavily than Artificial Analysis does, since thinking-oriented variants like Kimi K2 Thinking place markedly higher in its ordering. Without access to the test cases or the exact evaluation protocols, a firm explanation isn't possible, but the directional pattern is clear: models with explicit reasoning steps rank better on SWE-rebench, while Artificial Analysis, which blends coding with math and reasoning benchmarks, spreads its weight across a broader skill set.
Cole Brennan
Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
| # | Model | Score |
|---|---|---|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | gpt-5.4-2026-03-05-medium | 62.8% |
| 5 | Gemini 3.1 Pro Preview | 62.3% |
| 6 | DeepSeek-V3.2 | 60.9% |
| 7 | Claude Sonnet 4.6 | 60.7% |
| 8 | Claude Sonnet 4.5 | 60.0% |
| 9 | Qwen3.5-397B-A17B | 59.9% |
| 10 | Step-3.5-Flash | 59.6% |
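The five-run protocol matters because agentic evaluation is stochastic: the same model on the same task set can resolve a different subset on each attempt. A minimal sketch of the aggregation, using made-up per-run rates rather than real measurements:

```python
from statistics import mean, stdev

# Hypothetical resolved rates for one model across five independent runs;
# the values are illustrative, not actual SWE-rebench data.
runs = [0.66, 0.64, 0.65, 0.66, 0.65]

score = mean(runs)    # the single figure reported on the leaderboard
spread = stdev(runs)  # run-to-run variance the protocol averages out

print(f"score = {score:.1%} ± {spread:.1%}")
```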
Artificial Analysis composite index across coding, math, and reasoning benchmarks.
| # | Model | Score | tok/s | $/1M |
|---|---|---|---|---|
| 1 | GPT-5.4 | 57.2 | 79 | $5.63 |
| 2 | Gemini 3.1 Pro Preview | 57.2 | 117 | $4.50 |
| 3 | GPT-5.3 Codex | 54 | 86 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 58 | $10.00 |
| 5 | Claude Sonnet 4.6 | 51.7 | 61 | $6.00 |
| 6 | GPT-5.2 | 51.3 | 71 | $4.81 |
| 7 | GLM-5 | 49.8 | 66 | $1.55 |
| 8 | Claude Opus 4.5 | 49.7 | 59 | $10.00 |
| 9 | MiniMax-M2.7 | 49.6 | 45 | $0.525 |
| 10 | MiMo-V2-Pro | 49.2 | — | $1.50 |
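The cross-benchmark divergence described above can be read straight off the two tables. A minimal sketch that tabulates rank movement between them; scores are transcribed from the tables, Kimi K2 Thinking's positions come from the commentary since it falls outside both top 10s, and rank deltas are the meaningful comparison, because a resolved-rate percentage and a composite index are not on the same scale:

```python
# (rank, score) per benchmark, transcribed from the tables and commentary.
swe_rebench = {          # resolved rate, %
    "Claude Opus 4.6": (1, 65.3),
    "GLM-5": (3, 62.8),
    "Gemini 3.1 Pro Preview": (5, 62.3),
    "Kimi K2 Thinking": (17, 57.4),
}
artificial_analysis = {  # composite intelligence index
    "Gemini 3.1 Pro Preview": (2, 57.2),
    "Claude Opus 4.6": (4, 53.0),
    "GLM-5": (7, 49.8),
    "Kimi K2 Thinking": (36, 40.9),
}

for model, (sr_rank, _) in swe_rebench.items():
    aa_rank, _ = artificial_analysis[model]
    print(f"{model}: AA rank {aa_rank} -> SWE-rebench rank {sr_rank} "
          f"({aa_rank - sr_rank:+d} places)")
```

Kimi K2 Thinking gains 19 places moving from the composite index to the agentic benchmark, the largest swing in the set and the basis for the reasoning-model pattern noted above.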
Output tokens per second — higher is faster. Minimum intelligence score of 40.
| # | Model | tok/s |
|---|---|---|
| 1 | Grok 4.20 Beta 0309 | 237 |
| 2 | GPT-5 Codex | 207 |
| 3 | Gemini 3 Flash Preview | 192 |
| 4 | GPT-5.4 mini | 186 |
| 5 | GPT-5.4 nano | 185 |
| 6 | Qwen3.5 122B A10B | 145 |
| 7 | GPT-5.1 Codex | 138 |
| 8 | MiMo-V2-Flash | 123 |
| 9 | Gemini 3 Pro Preview | 119 |
| 10 | GPT-5.2 Codex | 118 |
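For context on how a tok/s figure is produced: it is tokens emitted divided by generation time, typically measured from the first streamed token so that prefill latency doesn't dilute the number. A minimal sketch; `stream_tokens` is a hypothetical streaming client, not any particular vendor's API:

```python
import time

def tokens_per_second(stream_tokens, prompt: str) -> float:
    # Timestamp each token as it arrives from the (hypothetical) stream.
    timestamps = [time.monotonic() for _ in stream_tokens(prompt)]
    if len(timestamps) < 2:
        raise ValueError("need at least two tokens to measure speed")
    # Inter-token rate: tokens after the first, over time elapsed since it.
    return (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])
```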
Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.
| # | Model | $/1M |
|---|---|---|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | GPT-5.4 nano | $0.463 |
| 4 | MiniMax-M2.7 | $0.525 |
| 5 | KAT Coder Pro V2 | $0.525 |
| 6 | MiniMax-M2.5 | $0.525 |
| 7 | GPT-5 mini | $0.688 |
| 8 | Qwen3.5 27B | $0.825 |
| 9 | GLM-4.7 | $1.00 |
| 10 | Kimi K2 Thinking | $1.07 |
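The blended figure folds separate input and output prices into a single number at a 3:1 input:output token mix, i.e. blended = (3 × input + 1 × output) / 4. A minimal sketch of the arithmetic; the per-token prices below are assumptions chosen to reproduce the DeepSeek V3.2 row, not published rates:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Blended $/1M tokens at the given input:output token mix."""
    total = input_ratio + output_ratio
    return (input_ratio * input_per_m + output_ratio * output_per_m) / total

# Assumed prices (not from the table): $0.28/1M input and $0.42/1M output
# reproduce DeepSeek V3.2's blended $0.315 at a 3:1 mix.
print(f"{blended_price(0.28, 0.42):.3f}")  # 0.315
```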