Claude Opus 4.6 holds the SWE-rebench top position at 65.3%. On Artificial Analysis the same model ranks only ninth, with a composite score of 53; the two numbers sit on different scales and are not directly comparable, but the rank gap shows how differently the two leaderboards reward solving real repository issues. The elite tier also clusters tightly: gpt-5.2-2025-12-11-medium (64.4%), GLM-5 (62.8%), and Junie (62.8%) occupy positions 2 through 4, with only 1.6 percentage points separating them, where earlier snapshots showed more dispersed rankings.

Gemini 3.1 Pro Preview shows the divergence from the other direction: it ranks 3rd on Artificial Analysis (57.2 composite) but only 7th on SWE-rebench (62.3%), a counterintuitive split that suggests the benchmarks measure different failure modes. Artificial Analysis may weight certain prompt-following or reasoning tasks more heavily, while SWE-rebench's repository-level problem-solving involves integrating code across files, managing context, and validating against actual test suites.

The larger reorganization appears in the mid-tier. GLM-5 sits 17th on Artificial Analysis but 3rd on SWE-rebench, Kimi K2.5 29th versus 16th, and GLM-4.7 44th versus 14th: gaps of 14, 13, and 30 positions respectively, indicating these models have specific strengths in multi-step code repair that Artificial Analysis does not capture as prominently. The divergence is methodological: SWE-rebench tests models on unmodified real pull requests and requires executable validation, whereas Artificial Analysis appears to use a broader evaluation framework that may include instruction-following or synthetic tasks. Within SWE-rebench alone, the top 10 shows no movement, suggesting the highest-performing models have plateaued relative to each other, while the reshuffling below position 15 reflects genuine capability differences rather than noise.
Cole Brennan
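The mid-tier movements cited above are plain rank subtractions between the two leaderboards. A minimal sketch in Python, using the positions from the commentary (ranks beyond 10 come from the full leaderboards, not the top-10 excerpts below):

```python
# Rank deltas between the two leaderboards for the mid-tier movers
# cited in the commentary. Positions past 10 are from the full
# leaderboards, not the top-10 excerpts reproduced below.
aa_rank  = {"GLM-5": 17, "Kimi K2.5": 29, "GLM-4.7": 44}  # Artificial Analysis
swe_rank = {"GLM-5": 3,  "Kimi K2.5": 16, "GLM-4.7": 14}  # SWE-rebench

for model, aa in aa_rank.items():
    swe = swe_rank[model]
    print(f"{model}: {aa} -> {swe} ({aa - swe:+d} positions)")
# GLM-5: 17 -> 3 (+14 positions)
# Kimi K2.5: 29 -> 16 (+13 positions)
# GLM-4.7: 44 -> 14 (+30 positions)
```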
Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a single standardized scaffold for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
| # | Model | Score |
|---|---|---|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | Junie | 62.8% |
| 5 | gpt-5.4-2026-03-05-medium | 62.8% |
| 6 | GLM-5.1 | 62.7% |
| 7 | Gemini 3.1 Pro Preview | 62.3% |
| 8 | DeepSeek-V3.2 | 60.9% |
| 9 | Claude Sonnet 4.6 | 60.7% |
| 10 | Claude Sonnet 4.5 | 60.0% |
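With three models tied at 62.8%, the five-run averaging described above is what keeps such fractional gaps meaningful. A minimal sketch of that aggregation, using hypothetical per-run resolved rates; SWE-rebench's exact aggregation may differ:

```python
# Hedged sketch: averaging five independent runs, as SWE-rebench does
# to control for stochastic variance. The per-run rates are invented
# for illustration; they are not published SWE-rebench data.
from statistics import mean, stdev

runs = [0.660, 0.640, 0.650, 0.670, 0.645]  # hypothetical resolved rates
print(f"score = {mean(runs):.1%} +/- {stdev(runs):.1%}")
# score = 65.3% +/- 1.2%
```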
Artificial Analysis composite index across coding, math, and reasoning benchmarks.
| # | Model | Score | tok/s | $/1M |
|---|---|---|---|---|
| 1 | GPT-5.5 | 60.2 | 74 | $11.25 |
| 2 | Claude Opus 4.7 | 57.3 | 58 | $10.94 |
| 3 | Gemini 3.1 Pro Preview | 57.2 | 130 | $4.50 |
| 4 | GPT-5.4 | 56.8 | 86 | $5.63 |
| 5 | Kimi K2.6 | 53.9 | 30 | $1.71 |
| 6 | MiMo-V2.5-Pro | 53.8 | 64 | $1.50 |
| 7 | GPT-5.3 Codex | 53.6 | 87 | $4.81 |
| 8 | Grok 4.3 | 53.2 | 108 | $1.56 |
| 9 | Claude Opus 4.6 | 53.0 | 48 | $10.94 |
| 10 | Muse Spark | 52.1 | 0 | $0.00 |
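The composite score aggregates per-benchmark results into one number. A minimal sketch of that kind of aggregation, assuming equal weights and invented sub-scores (Artificial Analysis's actual weighting and sub-scores are not shown here):

```python
# Hedged sketch: a composite index as an average of per-benchmark
# scores. Artificial Analysis's actual weighting is not published
# here; equal weights and the sub-scores below are assumptions.
subscores = {"coding": 50.0, "math": 60.0, "reasoning": 70.0}  # hypothetical
composite = sum(subscores.values()) / len(subscores)
print(f"composite = {composite:.1f}")  # composite = 60.0
```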
Output tokens per second — higher is faster. Minimum intelligence score of 40.
| # | Model | tok/s |
|---|---|---|
| 1 | Gemini 3 Flash Preview | 199 |
| 2 | Qwen3.6 35B A3B | 194 |
| 3 | GPT-5 Codex | 187 |
| 4 | GPT-5.1 Codex | 182 |
| 5 | GPT-5.4 mini | 177 |
| 6 | GPT-5.4 nano | 164 |
| 7 | Qwen3.5 122B A10B | 157 |
| 8 | GPT-5.1 | 153 |
| 9 | MiMo-V2-Flash | 150 |
| 10 | MiMo-V2-Omni-0327 | 139 |
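Throughput translates directly into wall-clock streaming time. A back-of-envelope sketch; it ignores time-to-first-token, which these figures don't capture:

```python
# Time to stream an n-token response at r tokens/second.
def stream_seconds(n_tokens: int, tok_per_s: float) -> float:
    return n_tokens / tok_per_s

# A 1,000-token answer at the table's fastest and slowest entries:
print(f"{stream_seconds(1000, 199):.1f}s at 199 tok/s")  # 5.0s
print(f"{stream_seconds(1000, 139):.1f}s at 139 tok/s")  # 7.2s
```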
Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.
| # | Model | $/1M |
|---|---|---|
| 1 | MiMo-V2-Flash | $0.150 |
| 2 | DeepSeek V4 Flash | $0.175 |
| 3 | DeepSeek V3.2 | $0.337 |
| 4 | GPT-5.4 nano | $0.463 |
| 5 | MiniMax-M2.7 | $0.525 |
| 6 | KAT Coder Pro V2 | $0.525 |
| 7 | MiniMax-M2.5 | $0.525 |
| 8 | Qwen3.6 35B A3B | $0.557 |
| 9 | GPT-5 mini | $0.688 |
| 10 | MiMo-V2.5 | $0.800 |
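The blended figure above is a weighted average of input and output prices at the stated 3:1 ratio. A minimal sketch; the per-direction prices are hypothetical, since the table publishes only the blend:

```python
# Blended $/1M tokens at a 3:1 input:output ratio, matching the
# table's methodology. Per-direction prices below are hypothetical.
def blended_price(input_per_m: float, output_per_m: float,
                  in_weight: float = 3.0, out_weight: float = 1.0) -> float:
    return (input_per_m * in_weight + output_per_m * out_weight) / (in_weight + out_weight)

print(f"${blended_price(0.10, 0.30):.3f}/1M")  # $0.150/1M
```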