Claude Opus 4.6 holds the SWE-rebench lead at 65.3%, even though it sits only fourth on the Artificial Analysis index at 53, while Gemini 3.1 Pro Preview, first on Artificial Analysis at 57.2, ranks fifth on SWE-rebench at 62.3%. The contrast suggests the two benchmarks measure different aspects of code generation capability or apply distinct evaluation criteria. GLM-5 and Kimi K2.5 likewise fare much better on SWE-rebench than their Artificial Analysis standings would predict (GLM-5 scores 62.8% against a 49.8 index, Kimi K2.5 58.5% against 46.8), pointing either to genuine strength on repository-level tasks or to a methodology that rewards certain architectural choices more than the aggregated index does.

The top five models on SWE-rebench (Claude Opus 4.6, gpt-5.2-2025-12-11-medium, GLM-5, gpt-5.4-2026-03-05-medium, and Gemini 3.1 Pro Preview) all score between 62.3% and 65.3%, a compressed band compared with the wider spreads of earlier rankings. Without visibility into SWE-rebench's task composition, evaluation protocol, or error bars, though, it is unclear whether this clustering reflects genuine convergence in code-solving ability or an artifact of test set size and difficulty distribution. The Artificial Analysis list shows no new entries; its tail remains Llama 2 Chat 13B at 8.4 in position 310, a minor footnote that does not affect the ranking above. Both leaderboards are dominated by frontier models, but the divergence in how those models rank underscores that code generation performance is not monolithic: repository-level problem-solving, as measured by SWE-rebench, appears to reward different capabilities than the aggregated tasks tracked by Artificial Analysis.
Cole Brennan
Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
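SWE-rebench does not publish its aggregation code here, but a minimal sketch of what averaging over five runs could look like is below; the helper name and the per-run values are hypothetical, used only to illustrate how repeated runs smooth out stochastic variance.

```python
from statistics import mean, stdev

def aggregate_runs(resolved_rates: list[float]) -> tuple[float, float]:
    """Average the resolved rate over repeated runs and report run-to-run spread."""
    return mean(resolved_rates), stdev(resolved_rates)

# Hypothetical per-run resolved rates for one model (fraction of tasks solved per run).
runs = [0.61, 0.64, 0.62, 0.63, 0.60]
score, spread = aggregate_runs(runs)
print(f"score = {score:.1%} (+/- {spread:.1%} across runs)")
```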
| # | Model | Score |
|---|---|---|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | gpt-5.4-2026-03-05-medium | 62.8% |
| 5 | Gemini 3.1 Pro Preview | 62.3% |
| 6 | DeepSeek-V3.2 | 60.9% |
| 7 | Claude Sonnet 4.6 | 60.7% |
| 8 | Claude Sonnet 4.5 | 60.0% |
| 9 | Qwen3.5-397B-A17B | 59.9% |
| 10 | Step-3.5-Flash | 59.6% |
Artificial Analysis composite index across coding, math, and reasoning benchmarks, alongside output speed (tokens per second) and blended price per 1M tokens.
| # | Model | Score | tok/s | $/1M |
|---|---|---|---|---|
| 1 | Gemini 3.1 Pro Preview | 57.2 | 132 | $4.50 |
| 2 | GPT-5.4 | 56.8 | 79 | $5.63 |
| 3 | GPT-5.3 Codex | 53.6 | 81 | $4.81 |
| 4 | Claude Opus 4.6 | 53.0 | 47 | $10.00 |
| 5 | Muse Spark | 52.1 | 0 | $0.00 |
| 6 | Claude Sonnet 4.6 | 51.7 | 59 | $6.00 |
| 7 | GLM-5.1 | 51.4 | 53 | $2.15 |
| 8 | GPT-5.2 | 51.3 | 75 | $4.81 |
| 9 | Qwen3.6 Plus | 50.0 | 48 | $1.13 |
| 10 | GLM-5 | 49.8 | 83 | $1.55 |
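The rank divergence between the two leaderboards can be quantified directly from the tables above. A minimal sketch, restricted to the four models whose names appear identically on both top-10 lists and using only the scores shown; the classic Spearman formula below assumes no tied scores:

```python
def ranks(scores: dict[str, float]) -> dict[str, int]:
    """Rank models by score, 1 = best (stable order for ties)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {model: i + 1 for i, model in enumerate(ordered)}

def spearman(a: dict[str, float], b: dict[str, float]) -> float:
    """Spearman rank correlation over the models present in both score tables."""
    common = set(a) & set(b)
    ra, rb = ranks({m: a[m] for m in common}), ranks({m: b[m] for m in common})
    n = len(common)
    d2 = sum((ra[m] - rb[m]) ** 2 for m in common)
    return 1 - 6 * d2 / (n * (n**2 - 1))  # classic formula, assumes no ties

# Models listed under identical names in both top-10 tables, scores as shown above.
swe_rebench = {"Claude Opus 4.6": 65.3, "GLM-5": 62.8,
               "Gemini 3.1 Pro Preview": 62.3, "Claude Sonnet 4.6": 60.7}
artificial_analysis = {"Gemini 3.1 Pro Preview": 57.2, "Claude Opus 4.6": 53.0,
                       "Claude Sonnet 4.6": 51.7, "GLM-5": 49.8}
print(f"Spearman rho = {spearman(swe_rebench, artificial_analysis):.2f}")
```

On this four-model subset the two orderings come out uncorrelated (rho = 0), consistent with the divergence noted above, though four models is far too small a sample to generalize from.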
Output tokens per second — higher is faster. Minimum intelligence score of 40.
| # | Model | tok/s |
|---|---|---|
| 1 | Gemini 3 Flash Preview | 189 |
| 2 | GPT-5.4 mini | 179 |
| 3 | GPT-5.4 nano | 173 |
| 4 | Grok 4.20 0309 | 165 |
| 5 | Grok 4.20 0309 v2 | 163 |
| 6 | GPT-5 Codex | 162 |
| 7 | GPT-5.1 Codex | 158 |
| 8 | Gemini 3 Pro Preview | 134 |
| 9 | Gemini 3.1 Pro Preview | 132 |
| 10 | Qwen3.5 122B A10B | 128 |
Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.
| # | Model | $/1M |
|---|---|---|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | GPT-5.4 nano | $0.463 |
| 4 | MiniMax-M2.7 | $0.525 |
| 5 | KAT Coder Pro V2 | $0.525 |
| 6 | MiniMax-M2.5 | $0.525 |
| 7 | GPT-5 mini | $0.688 |
| 8 | Qwen3.5 27B | $0.825 |
| 9 | GLM-4.7 | $1.00 |
| 10 | Kimi K2 Thinking | $1.07 |
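The blended $/1M figures above weight input and output tokens 3:1. A minimal sketch of that arithmetic, assuming the ratio means three input tokens for every output token and using hypothetical per-million prices rather than any listed model's pricing:

```python
def blended_price(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    """Blended $/1M tokens with `ratio` input tokens per output token (3:1 here)."""
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

# Hypothetical prices: $0.25 per 1M input tokens, $1.00 per 1M output tokens.
print(f"${blended_price(0.25, 1.00):.3f} per 1M blended tokens")
```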