The SWE-rebench rankings show no movement at the top tier: Claude Code, Junie, and Claude Opus 4.6 hold their positions at 52.9%, 52.1%, and 51.7% respectively, while the Artificial Analysis benchmark shows more volatility across its 323-model leaderboard. The cross-benchmark gaps are striking. Claude Opus 4.5 scores 49.7 on Artificial Analysis (8th) but only 43.8% on SWE-rebench (12th), a 5.9-point gap that points to either a methodology difference between the benchmarks or weaker performance under the SWE-rebench evaluation protocol. Kimi K2 Thinking climbed 15 positions on Artificial Analysis, where it scores 40.9, and posts 43.8% on SWE-rebench, ranking 13th there. GLM-5 fell 7 positions despite reasonable scores, posting 49.8 on Artificial Analysis against 42.1 on SWE-rebench. Gemini 3 Pro Preview shows a similar split: 48.4 on Artificial Analysis (11th) versus 46.7% on SWE-rebench (8th), suggesting the benchmarks weight different aspects of coding capability or draw on different problem distributions.

Mid-tier models show the most churn. Kimi K2.5 dropped 8 positions on SWE-rebench (37.9%) despite scoring 46.8 on Artificial Analysis, and GLM-4.6 climbed from position 53 to 22 on Artificial Analysis yet holds at 37.1% on SWE-rebench, also at position 22, a sign that the two benchmarks are not measuring identical capabilities. The absence of a clear correlation between SWE-rebench and Artificial Analysis rankings across the full dataset suggests the evaluations test distinct problem spaces, or that the SWE-rebench protocol imposes constraints (likely around repository-level code generation and integration) that don't map cleanly onto general coding performance.
Cole Brennan
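The "no clear correlation" claim above is checkable in a few lines. Below is a minimal Python sketch of Spearman's rank correlation, fed only the four models whose positions on both leaderboards appear explicitly in the analysis; the helper functions and the tiny sample are illustrative, and a real test would use every model shared between the two leaderboards.

```python
# Minimal sketch: Spearman rank correlation between two leaderboards.
# The four (Artificial Analysis, SWE-rebench) position pairs below are the
# only ones explicitly recoverable from the analysis above; four points
# prove nothing on their own, the full 323-model overlap is what matters.

def sample_ranks(values):
    """Rank values within the sample (1 = best position); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

def spearman_rho(xs, ys):
    """Spearman's rho via the classic no-ties formula."""
    rx, ry = sample_ranks(xs), sample_ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# (model, Artificial Analysis position, SWE-rebench position)
pairs = [
    ("Claude Opus 4.6", 4, 3),
    ("Claude Opus 4.5", 8, 12),
    ("Gemini 3 Pro Preview", 11, 8),
    ("GLM-4.6", 22, 22),
]
aa = [p[1] for p in pairs]
sr = [p[2] for p in pairs]
print(f"Spearman rho over {len(pairs)} shared models: {spearman_rho(aa, sr):.2f}")
```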
Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses a standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance.
| # | Model | Score |
|---|---|---|
| 1 | Claude Code | 52.9% |
| 2 | Junie | 52.1% |
| 3 | Claude Opus 4.6 | 51.7% |
| 4 | gpt-5.2-2025-12-11-xhigh | 51.7% |
| 5 | gpt-5.2-2025-12-11-medium | 51.0% |
| 6 | gpt-5.1-codex-max | 48.5% |
| 7 | Claude Sonnet 4.5 | 47.1% |
| 8 | Gemini 3 Pro Preview | 46.7% |
| 9 | Gemini 3 Flash Preview | 46.7% |
| 10 | gpt-5.2-codex | 45.0% |
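The five-run protocol described above leaves the aggregation unstated. A minimal sketch, assuming the published score is the mean resolved rate across the five runs (the per-run numbers are invented):

```python
# Sketch of the five-run protocol: report the mean resolved rate across
# runs, with the spread as a sanity check. Per-run rates are hypothetical;
# SWE-rebench states the five-run design but not the exact aggregation.
from statistics import mean, stdev

runs = [0.520, 0.535, 0.528, 0.522, 0.540]  # hypothetical resolved rates
print(f"score: {mean(runs):.1%} (± {stdev(runs):.1%} across {len(runs)} runs)")
```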
Artificial Analysis composite index across coding, math, and reasoning benchmarks.
| # | Model | Score | tok/s | $/1M |
|---|---|---|---|---|
| 1 | Gemini 3.1 Pro Preview | 57.2 | 121 | $4.50 |
| 2 | GPT-5.4 | 57 | 80 | $5.63 |
| 3 | GPT-5.3 Codex | 54 | 70 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 62 | $10.00 |
| 5 | Claude Sonnet 4.6 | 51.7 | 69 | $6.00 |
| 6 | GPT-5.2 | 51.3 | 77 | $4.81 |
| 7 | GLM-5 | 49.8 | 68 | $1.55 |
| 8 | Claude Opus 4.5 | 49.7 | 72 | $10.00 |
| 9 | GPT-5.2 Codex | 49 | 92 | $4.81 |
| 10 | Grok 4.20 Beta 0309 | 48.5 | 264 | $3.00 |
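A composite index of this kind is typically a weighted mean over category scores. The sketch below assumes equal weights and invented sub-scores, since the exact Artificial Analysis weighting isn't given here:

```python
# Sketch of a composite index as a weighted mean over category scores.
# Categories come from the caption above; the equal weights and the
# sub-scores are assumptions, not Artificial Analysis's published method.
subscores = {"coding": 55.0, "math": 60.0, "reasoning": 56.6}  # hypothetical
weights = {k: 1 / len(subscores) for k in subscores}           # assumed equal
composite = sum(weights[k] * v for k, v in subscores.items())
print(f"composite index: {composite:.1f}")
```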
Output tokens per second — higher is faster. Minimum intelligence score of 40.
| # | Model | tok/s |
|---|---|---|
| 1 | Grok 4.20 Beta 0309 | 264 |
| 2 | GPT-5 Codex | 207 |
| 3 | Gemini 3 Flash Preview | 183 |
| 4 | Qwen3.5 122B A10B | 157 |
| 5 | GPT-5.1 Codex | 135 |
| 6 | MiMo-V2-Flash | 129 |
| 7 | Gemini 3.1 Pro Preview | 121 |
| 8 | Gemini 3 Pro Preview | 118 |
| 9 | GPT-5.1 | 117 |
| 10 | Kimi K2 Thinking | 99 |
Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40.
| # | Model | $/1M |
|---|---|---|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | MiniMax-M2.5 | $0.525 |
| 4 | GPT-5 mini | $0.688 |
| 5 | Qwen3.5 27B | $0.825 |
| 6 | GLM-4.7 | $1.00 |
| 7 | Kimi K2 Thinking | $1.07 |
| 8 | Qwen3.5 122B A10B | $1.10 |
| 9 | Gemini 3 Flash Preview | $1.13 |
| 10 | Kimi K2.5 | $1.20 |
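The blended figure in the table above uses the 3:1 input-to-output weighting stated in the caption; a minimal sketch with hypothetical per-direction prices:

```python
# Blended $/1M tokens using the 3:1 input:output weighting from the caption.
# Only the weighting comes from the source; the prices below are hypothetical.
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Three parts input to one part output, per 1M tokens."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# e.g. hypothetical $1.00/1M input and $4.00/1M output:
print(f"${blended_price(1.00, 4.00):.2f}/1M blended")  # -> $1.75/1M
```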