Claude Opus 4.6 jumped from fourth to first on SWE-rebench, posting 65.3% versus its previous 53%, while gpt-5.2-2025-12-11-medium climbed to second at 64.4%, up from 51.0%. GLM-5 reached third at 62.8% after sitting at 49.8%, and DeepSeek-V3.2 vaulted from twenty-first at 37.5% to sixth at 60.9%, a 23.4-point gain that is the largest movement in the cohort.

The top tier has consolidated upward across the board. Gemini 3.1 Pro Preview slipped to fifth despite improving from 57.2% to 62.3%, a reversal possible only because scores rose across the whole cohort rather than for any single model. Claude Code, which led the prior ranking at 52.9%, now sits fourteenth at 58.4%, and Junie fell from second at 52.1% to eleventh at 59.5%; both made absolute gains that were masked by steeper climbs elsewhere. Twelve new entries appeared in the SWE-rebench top twenty-eight, including gpt-5.4-2026-03-05-medium at fourth and Qwen3.5-397B-A17B at ninth, while several prior top performers dropped out entirely, indicating substantial churn in which models are being evaluated.

The Artificial Analysis benchmark shows far less volatility: GPT-5.4 and Gemini 3.1 Pro Preview tie at 57.2, unchanged from the previous period, while Claude Opus 4.6 holds at 53 on that index despite its 12.3-point leap on SWE-rebench. The divergence suggests SWE-rebench scores are shifting faster than Artificial Analysis scores, which may reflect different evaluation methodologies, different task distributions, or recalibration of SWE-rebench itself. Without visibility into changes to the SWE-rebench problem set or scoring methodology between periods, the magnitude of these shifts warrants scrutiny: improvements of 10 to 20 points across the board could reflect genuine model advances, or they could signal that the benchmark has been modified, expanded, or re-baselined in ways that affect comparability.
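The deltas and rank movements above are straightforward arithmetic over two leaderboard snapshots. A minimal sketch in Python, using only the three models and figures quoted above as an excerpt (ranks are therefore computed within the excerpt, not the full leaderboard):

```python
# Score deltas and rank changes between two benchmark snapshots.
# Values are the figures quoted above; this excerpt is not the full leaderboard.

prev = {  # prior period: model -> resolved rate (%)
    "Claude Opus 4.6": 53.0,
    "gpt-5.2-2025-12-11-medium": 51.0,
    "DeepSeek-V3.2": 37.5,
}
curr = {  # current period
    "Claude Opus 4.6": 65.3,
    "gpt-5.2-2025-12-11-medium": 64.4,
    "DeepSeek-V3.2": 60.9,
}

def rank(scores: dict[str, float]) -> dict[str, int]:
    """Map each model to its 1-based rank, highest score first."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {model: i + 1 for i, model in enumerate(ordered)}

prev_rank, curr_rank = rank(prev), rank(curr)
for model, score in curr.items():
    delta = score - prev[model]
    print(f"{model}: {prev[model]:.1f}% -> {score:.1f}% ({delta:+.1f} pts), "
          f"rank {prev_rank[model]} -> {curr_rank[model]} within this excerpt")
```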
Cole Brennan
Daily rankings from SWE-rebench, a benchmark designed to fairly compare LLM capabilities on real-world software engineering tasks. Unlike other evaluations, it uses standardized scaffolding for all models, continuously updates its dataset to prevent contamination, and runs each model five times to account for stochastic variance (a sketch of that averaging follows the table below).
| # | Model | Score |
|---|---|---|
| 1 | Claude Opus 4.6 | 65.3% |
| 2 | gpt-5.2-2025-12-11-medium | 64.4% |
| 3 | GLM-5 | 62.8% |
| 4 | gpt-5.4-2026-03-05-medium | 62.8% |
| 5 | Gemini 3.1 Pro Preview | 62.3% |
| 6 | DeepSeek-V3.2 | 60.9% |
| 7 | Claude Sonnet 4.6 | 60.7% |
| 8 | Claude Sonnet 4.5 | 60.0% |
| 9 | Qwen3.5-397B-A17B | 59.9% |
| 10 | Step-3.5-Flash | 59.6% |
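A minimal sketch of the five-run averaging described above; the per-run numbers are invented for illustration and are not real SWE-rebench run data:

```python
from statistics import mean, stdev

# Averaging one model's resolved rate over five independent runs to smooth
# out stochastic variance, as the SWE-rebench methodology describes.
# The per-run numbers below are invented for illustration only.
runs = [63.9, 65.2, 64.8, 65.6, 64.5]  # resolved rate (%) per run

print(f"reported score: {mean(runs):.1f}%  "
      f"(sample std dev {stdev(runs):.2f} pts over {len(runs)} runs)")
```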
Artificial Analysis composite index across coding, math, and reasoning benchmarks; a sketch of how such a composite can be formed follows the table.
| # | Model | Score | tok/s | $/1M |
|---|---|---|---|---|
| 1 | GPT-5.4 | 57.2 | 82 | $5.63 |
| 2 | Gemini 3.1 Pro Preview | 57.2 | 117 | $4.50 |
| 3 | GPT-5.3 Codex | 54 | 65 | $4.81 |
| 4 | Claude Opus 4.6 | 53 | 47 | $10.00 |
| 5 | Claude Sonnet 4.6 | 51.7 | 54 | $6.00 |
| 6 | GPT-5.2 | 51.3 | 67 | $4.81 |
| 7 | GLM-5 | 49.8 | 89 | $1.55 |
| 8 | Claude Opus 4.5 | 49.7 | 53 | $10.00 |
| 9 | MiniMax-M2.7 | 49.6 | 44 | $0.525 |
| 10 | MiMo-V2-Pro | 49.2 | n/a | n/a |
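Artificial Analysis publishes only the composite value; the exact benchmark set and weighting behind it are not given in this post. A minimal sketch, assuming an unweighted mean over hypothetical per-benchmark scores purely for illustration:

```python
# Forming a composite index from per-benchmark scores.
# ASSUMPTION: unweighted mean over hypothetical scores; the real Artificial
# Analysis index may weight its component benchmarks differently.
per_benchmark = {"coding": 55.0, "math": 61.0, "reasoning": 52.0}

composite = sum(per_benchmark.values()) / len(per_benchmark)
print(f"composite index: {composite:.1f}")  # 56.0
```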
Output tokens per second — higher is faster. Minimum intelligence score of 40.
| # | Model | tok/s |
|---|---|---|
| 1 | GPT-5.4 mini | 233 |
| 2 | GPT-5.4 nano | 221 |
| 3 | Gemini 3 Flash Preview | 185 |
| 4 | GPT-5 Codex | 170 |
| 5 | Qwen3.5 122B A10B | 152 |
| 6 | MiMo-V2-Flash | 139 |
| 7 | GPT-5.1 Codex | 127 |
| 8 | GPT-5.1 | 124 |
| 9 | Gemini 3 Pro Preview | 119 |
| 10 | Gemini 3.1 Pro Preview | 117 |
Blended cost per 1M tokens (3:1 input/output) — lower is cheaper. Minimum intelligence score of 40. A sketch of the blending calculation follows the table.
| # | Model | $/1M |
|---|---|---|
| 1 | MiMo-V2-Flash | $0.15 |
| 2 | DeepSeek V3.2 | $0.315 |
| 3 | GPT-5.4 nano | $0.463 |
| 4 | MiniMax-M2.7 | $0.525 |
| 5 | MiniMax-M2.5 | $0.525 |
| 6 | GPT-5 mini | $0.688 |
| 7 | Qwen3.5 27B | $0.825 |
| 8 | GLM-4.7 | $1.00 |
| 9 | Kimi K2 Thinking | $1.07 |
| 10 | Qwen3.5 122B A10B | $1.10 |
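The 3:1 blending combines separate input and output token prices into the single figure shown above. A minimal sketch; the prices passed in are hypothetical, not any listed provider's actual rates:

```python
# Blended price per 1M tokens at a 3:1 input/output token ratio.
# The example prices are hypothetical, not any provider's published rates.

def blended_price(input_per_m: float, output_per_m: float,
                  ratio: tuple[int, int] = (3, 1)) -> float:
    """Weighted average of input and output prices per 1M tokens."""
    in_weight, out_weight = ratio
    return (input_per_m * in_weight + output_per_m * out_weight) / (in_weight + out_weight)

print(f"${blended_price(0.50, 2.00):.3f} per 1M tokens")  # (3*0.50 + 1*2.00) / 4 = $0.875
```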