Claude Opus 4.6 holds the top position on SWE-rebench at 65.3%, unchanged from the previous cycle, while the tier immediately below has compressed considerably. GPT-5.2-2025-12-11-medium sits at 64.4%, with GLM-5, GPT-5.4-2026-03-05-medium, GLM-5.1, and Gemini 3.1 Pro Preview clustered between 62.3 and 62.8 percent, suggesting a plateau where further gains require either methodological refinement or fundamentally different approaches to coding tasks.

The notable movers are Claude Sonnet 4.6, which jumped from position 11 to 8 with a 9-point gain from 51.7 to 60.7 percent, and GLM-4.7, which climbed from position 43 to 14 by improving 16.6 points from 42.1 to 58.7 percent. Gemini 3.1 Pro Preview moved the other way, slipping from position 3 to 6 with a 4.9-point decline to 62.3 percent on SWE-rebench despite maintaining strong performance on Artificial Analysis.

The divergence between these two benchmarks is instructive: Claude Opus 4.6 scores 65.3 on SWE-rebench but only 53 on Artificial Analysis, a 12.3-point gap, while Gemini 3.1 Pro Preview shows a narrower 5.1-point separation at 62.3 versus 57.2. That pattern suggests the benchmarks weight different problem classes, or that SWE-rebench's evaluation criteria reward the specific capabilities Claude's latest iteration emphasizes. Without visibility into whether SWE-rebench's test set, scoring rubric, or evaluation harness changed, it remains unclear whether these movements reflect genuine capability shifts or benchmark sensitivity to model-specific strengths.
Cole Brennan
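For concreteness, the cross-benchmark gaps cited above fall straight out of the two tables that follow; a minimal sketch, with the scores hardcoded from this digest:

```python
# Gap between a model's SWE-rebench score and its Artificial Analysis
# index, using the figures reported in the tables below.
scores = {
    "Claude Opus 4.6":        {"swe_rebench": 65.3, "artificial_analysis": 53.0},
    "Gemini 3.1 Pro Preview": {"swe_rebench": 62.3, "artificial_analysis": 57.2},
}

for model, s in scores.items():
    gap = s["swe_rebench"] - s["artificial_analysis"]
    print(f"{model}: {s['swe_rebench']:.1f} vs {s['artificial_analysis']:.1f} -> gap {gap:+.1f}")

# Claude Opus 4.6: 65.3 vs 53.0 -> gap +12.3
# Gemini 3.1 Pro Preview: 62.3 vs 57.2 -> gap +5.1
```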
Daily rankings from SWE-rebench, a benchmark designed to compare LLM capabilities fairly on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
| # | Model | Score |
|---|---|---|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | gpt-5.4-2026-03-05-medium | 62.8% |
| 5 | GLM-5.1 | 62.7% |
| 6 | Gemini 3.1 Pro Preview | 62.3% |
| 7 | DeepSeek-V3.2 | 60.9% |
| 8 | Claude Sonnet 4.6 | 60.7% |
| 9 | Claude Sonnet 4.5 | 60.0% |
| 10 | Qwen3.5-397B-A17B | 59.9% |
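The five-runs-per-model protocol means each published score is an aggregate rather than a single sample. A rough illustration of averaging per-run resolved rates; the run values here are invented for the example, not actual SWE-rebench output:

```python
from statistics import mean, stdev

# Hypothetical resolved rates (%) for one model across five independent
# runs; illustrative numbers only, not real benchmark data.
runs = [64.8, 65.9, 65.1, 65.5, 65.2]

print(f"mean  = {mean(runs):.1f}%")     # the kind of figure the table reports
print(f"stdev = {stdev(runs):.2f} pp")  # run-to-run stochastic variance
```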
Artificial Analysis composite index across coding, math, and reasoning benchmarks.
| # | Model | Score | tok/s | $/1M |
|---|---|---|---|---|
| 1 | GPT-5.5 | 60.2 | 68 | $11.25 |
| 2 | Claude Opus 4.7 | 57.3 | 56 | $10.00 |
| 3 | Gemini 3.1 Pro Preview | 57.2 | 133 | $4.50 |
| 4 | GPT-5.4 | 56.8 | 90 | $5.63 |
| 5 | Kimi K2.6 | 53.9 | 0 | $1.71 |
| 6 | MiMo-V2.5-Pro | 53.8 | 65 | $1.50 |
| 7 | GPT-5.3 Codex | 53.6 | 96 | $4.81 |
| 8 | Claude Opus 4.6 | 53 | 57 | $10.00 |
| 9 | Muse Spark | 52.1 | 0 | $0.00 |
| 10 | Qwen3.6 Max Preview | 51.8 | 33 | $2.92 |
Output tokens per second — higher is faster. Minimum intelligence score of 40.
| # | Model | tok/s |
|---|---|---|
| 1 | Gemini 3 Flash Preview | 205 |
| 2 | Qwen3.6 35B A3B | 199 |
| 3 | GPT-5 Codex | 192 |
| 4 | GPT-5.1 Codex | 183 |
| 5 | GPT-5.4 mini | 174 |
| 6 | GPT-5.4 nano | 160 |
| 7 | Qwen3.5 122B A10B | 145 |
| 8 | Gemini 3 Pro Preview | 143 |
| 9 | GPT-5.1 | 142 |
| 10 | Gemini 3.1 Pro Preview | 133 |
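Both the speed ranking above and the cost ranking below apply the same eligibility floor, an intelligence score of at least 40, before sorting. A sketch of that filter-then-rank step, using made-up entries rather than Artificial Analysis data:

```python
# Drop models below the intelligence floor, then rank the rest by
# output speed. Names and numbers here are hypothetical.
FLOOR = 40

models = [
    {"name": "speedy-7b",    "intelligence": 31, "tok_s": 240},
    {"name": "balanced-70b", "intelligence": 46, "tok_s": 205},
    {"name": "heavy-400b",   "intelligence": 58, "tok_s": 57},
]

eligible = [m for m in models if m["intelligence"] >= FLOOR]
for rank, m in enumerate(sorted(eligible, key=lambda m: m["tok_s"], reverse=True), 1):
    print(f"{rank}. {m['name']}: {m['tok_s']} tok/s")

# speedy-7b is excluded despite being fastest; the list starts at balanced-70b.
```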
Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.
| # | Model | $/1M |
|---|---|---|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V4 Flash | $0.175 |
| 3 | DeepSeek V3.2 | $0.315 |
| 4 | GPT-5.4 nano | $0.463 |
| 5 | MiniMax-M2.7 | $0.525 |
| 6 | KAT Coder Pro V2 | $0.525 |
| 7 | MiniMax-M2.5 | $0.525 |
| 8 | Qwen3.6 35B A3B | $0.557 |
| 9 | GPT-5 mini | $0.688 |
| 10 | Qwen3.5 27B | $0.825 |
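The blended figure weights input and output token prices 3:1, reflecting the common assumption that prompts run longer than completions. A minimal sketch of the computation; the prices below are placeholders, not quoted provider rates:

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blended $/1M tokens at a 3:1 input:output token mix."""
    return (3 * input_per_m + output_per_m) / 4

# Placeholder rates: $0.10/1M input, $0.30/1M output.
print(f"${blended_price(0.10, 0.30):.3f}/1M")  # $0.150/1M
```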