The Inference Report

April 28, 2026
AI Labs — News

OpenAI is consolidating its position as the infrastructure layer for federal and enterprise deployment while tightening its partnership with Microsoft and seeding the market with orchestration standards. The FedRAMP Moderate authorization removes a major barrier to government adoption of ChatGPT Enterprise and the API, and the amended Microsoft agreement clarifies long-term economics while signaling confidence in sustained scaling. Symphony, positioned as an open-source spec for agent orchestration, follows a familiar playbook: release developer tooling that locks users into your API ecosystem. The Choco case study reinforces this: customer success stories built on OpenAI APIs generate demand without OpenAI bearing the customer acquisition costs.

IBM's Bob announcement claims a 45% productivity gain across 80,000 internal users, suggesting enterprise coding assistance is becoming table stakes rather than a differentiator; that IBM is building its own agent rather than embedding a third-party model indicates where margin pressure is landing. AWS's mention of its Anthropic and Meta partnerships, the Bedrock AgentCore CLI, and Lambda S3 Files shows the cloud providers racing to commoditize agent infrastructure and reduce friction between model consumption and application deployment. AMD's TraceLens addresses a real problem in AI workload optimization but operates in the unglamorous infrastructure layer where margins compress fastest. Hugging Face's two announcements, one on specialized imaging and one on privacy-filtered web apps, occupy narrow verticals where adoption depends on solving domain-specific problems rather than general capability. Anthropic's Sydney office opening is geographic expansion, not product news.
The pattern across these ten items is clear: the labs are fighting for different layers of the stack, but the real battle is between those selling access to inference (OpenAI, Anthropic via AWS) and those selling the orchestration and optimization layers that make inference useful at scale.

Sloane Duvall

AI Labs — Models

A curated reference of models from major AI labs, with open/closed-weight status, input modalities, and context window size. American labs tend toward closed weights, while Chinese labs tend toward open weights.
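
For readers who want to filter a catalog like this programmatically, here is a minimal sketch of one way to model the entries. The field names and sample values are illustrative only, not an official schema, and the handful of entries shown are transcribed from the list below:

```python
# Hypothetical record format for catalog entries; the ModelEntry fields
# are my own naming, not any official schema.
from dataclasses import dataclass

@dataclass
class ModelEntry:
    lab: str
    name: str
    open_weights: bool
    modalities: tuple   # e.g. ("Text", "Vision")
    context: int        # context window, in tokens

# A few sample entries transcribed from the reference list.
ENTRIES = [
    ModelEntry("OpenAI", "gpt-oss-120b", True, ("Text",), 131_000),
    ModelEntry("Anthropic", "Claude Opus 4.6", False, ("Text", "Vision"), 1_000_000),
    ModelEntry("Qwen", "Qwen3 235B A22B Instruct 2507", True, ("Text",), 262_000),
]

def open_weight_models(entries, min_context=0):
    """Filter to open-weights models meeting a context-window floor."""
    return [e for e in entries
            if e.open_weights and e.context >= min_context]

names = [e.name for e in open_weight_models(ENTRIES, min_context=200_000)]
# → ["Qwen3 235B A22B Instruct 2507"]
```

The same structure extends naturally to filtering by modality (e.g. `"Vision" in e.modalities`) or grouping by lab.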

Amazon (US)
Closed Weights
  • Amazon: Nova 2 Lite
    Text, Vision, Video, Files · 1M context
  • Amazon: Nova Premier 1.0
    Text, Vision · 1M context
Open Weights

None

Anthropic (US)
Closed Weights
  • Anthropic: Claude Haiku 4.5
    Text, Vision · 200K context
  • Anthropic: Claude Opus 4
    Text, Vision, Files · 200K context
  • Anthropic: Claude Opus 4.1
    Text, Vision, Files · 200K context
  • Anthropic: Claude Opus 4.5
    Text, Vision, Files · 200K context
  • Anthropic: Claude Opus 4.6
    Text, Vision · 1M context
  • Anthropic: Claude Opus 4.6 (Fast)
    Text, Vision · 1M context
  • Anthropic: Claude Opus 4.7
    Text, Vision · 1M context
  • Anthropic: Claude Sonnet 4
    Text, Vision, Files · 1M context
  • Anthropic: Claude Sonnet 4.5
    Text, Vision, Files · 1M context
  • Anthropic: Claude Sonnet 4.6
    Text, Vision · 1M context
Open Weights

None

Google DeepMind (US)
Closed Weights
  • Google: Gemini 2.5 Flash
    Text, Vision, Audio, Video, Files · 1M context
  • Google: Gemini 2.5 Flash Lite
    Text, Vision, Audio, Video, Files · 1M context
  • Google: Gemini 2.5 Flash Lite Preview 09-2025
    Text, Vision, Audio, Video, Files · 1M context
  • Google: Gemini 2.5 Pro
    Text, Vision, Audio, Video, Files · 1M context
  • Google: Gemini 2.5 Pro Preview 05-06
    Text, Vision, Audio, Video, Files · 1M context
  • Google: Gemini 2.5 Pro Preview 06-05
    Text, Vision, Audio, Files · 1M context
  • Google: Gemini 3 Flash Preview
    Text, Vision, Audio, Video, Files · 1M context
  • Google: Gemini 3.1 Flash Lite Preview
    Text, Vision, Audio, Video, Files · 1M context
  • Google: Gemini 3.1 Pro Preview
    Text, Vision, Audio, Video, Files · 1M context
  • Google: Gemini 3.1 Pro Preview Custom Tools
    Text, Vision, Audio, Video, Files · 1M context
  • Google: Lyria 3 Clip Preview
    Text, Vision · 1M context
  • Google: Lyria 3 Pro Preview
    Text, Vision · 1M context
  • Google: Nano Banana (Gemini 2.5 Flash Image)
    Text, Vision · 33K context
  • Google: Nano Banana 2 (Gemini 3.1 Flash Image Preview)
    Text, Vision · 66K context
  • Google: Nano Banana Pro (Gemini 3 Pro Image Preview)
    Text, Vision · 66K context
Open Weights
  • Google: Gemma 3n 4B
    Text · 33K context
  • Google: Gemma 4 26B A4B
    Text, Vision, Video · 262K context
  • Google: Gemma 4 31B
    Text, Vision, Video · 262K context
Meta (US)
Closed Weights

None

Open Weights
  • Meta: Llama Guard 4 12B
    Text, Vision · 164K context
OpenAI (US)
Closed Weights
  • OpenAI: GPT Audio
    Text, Audio · 128K context
  • OpenAI: GPT Audio Mini
    Text, Audio · 128K context
  • OpenAI: GPT-4o Audio
    Text, Audio · 128K context
  • OpenAI: GPT-5
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5 Chat
    Text, Vision, Files · 128K context
  • OpenAI: GPT-5 Codex
    Text, Vision · 400K context
  • OpenAI: GPT-5 Image
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5 Image Mini
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5 Mini
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5 Nano
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5 Pro
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5.1
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5.1 Chat
    Text, Vision, Files · 128K context
  • OpenAI: GPT-5.1-Codex
    Text, Vision · 400K context
  • OpenAI: GPT-5.1-Codex-Max
    Text, Vision · 400K context
  • OpenAI: GPT-5.1-Codex-Mini
    Text, Vision · 400K context
  • OpenAI: GPT-5.2
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5.2 Chat
    Text, Vision, Files · 128K context
  • OpenAI: GPT-5.2 Pro
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5.2-Codex
    Text, Vision · 400K context
  • OpenAI: GPT-5.3 Chat
    Text, Vision, Files · 128K context
  • OpenAI: GPT-5.3-Codex
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5.4
    Text, Vision, Files · 1M context
  • OpenAI: GPT-5.4 Image 2
    Text, Vision, Files · 272K context
  • OpenAI: GPT-5.4 Mini
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5.4 Nano
    Text, Vision, Files · 400K context
  • OpenAI: GPT-5.4 Pro
    Text, Vision, Files · 1M context
  • OpenAI: GPT-5.5
    Text, Vision, Files · 1M context
  • OpenAI: GPT-5.5 Pro
    Text, Vision, Files · 1M context
  • OpenAI: o3 Deep Research
    Text, Vision, Files · 200K context
  • OpenAI: o3 Pro
    Text, Vision, Files · 200K context
  • OpenAI: o4 Mini Deep Research
    Text, Vision, Files · 200K context
Open Weights
  • OpenAI: gpt-oss-120b
    Text · 131K context
  • OpenAI: gpt-oss-20b
    Text · 131K context
  • OpenAI: gpt-oss-safeguard-20b
    Text · 131K context
xAI (US)
Closed Weights
  • xAI: Grok 3
    Text · 131K context
  • xAI: Grok 3 Mini
    Text · 131K context
  • xAI: Grok 4
    Text, Vision, Files · 256K context
  • xAI: Grok 4 Fast
    Text, Vision, Files · 2M context
  • xAI: Grok 4.1 Fast
    Text, Vision, Files · 2M context
  • xAI: Grok 4.20
    Text, Vision, Files · 2M context
  • xAI: Grok 4.20 Multi-Agent
    Text, Vision, Files · 2M context
  • xAI: Grok Code Fast 1
    Text · 256K context
Open Weights

None

Mistral AI (FR)
Closed Weights
  • Mistral: Codestral 2508
    Text · 256K context
  • Mistral: Devstral Medium
    Text · 131K context
  • Mistral: Mistral Large 3 2512
    Text, Vision · 262K context
  • Mistral: Mistral Medium 3
    Text, Vision · 131K context
  • Mistral: Mistral Medium 3.1
    Text, Vision · 131K context
  • Mistral: Mistral Small Creative
    Text · 33K context
Open Weights
  • Mistral: Devstral 2 2512
    Text · 262K context
  • Mistral: Devstral Small 1.1
    Text · 131K context
  • Mistral: Ministral 3 14B 2512
    Text, Vision · 262K context
  • Mistral: Ministral 3 3B 2512
    Text, Vision · 131K context
  • Mistral: Ministral 3 8B 2512
    Text, Vision · 262K context
  • Mistral: Mistral Small 3.2 24B
    Text, Vision · 128K context
  • Mistral: Mistral Small 4
    Text, Vision · 262K context
  • Mistral: Voxtral Small 24B 2507
    Text, Audio · 32K context
AI21 Labs (IL)
Closed Weights

None

Open Weights
  • AI21: Jamba Large 1.7
    Text · 256K context
Alibaba / Qwen (CN)
Closed Weights
  • Qwen: Qwen Plus 0728
    Text · 1M context
  • Qwen: Qwen Plus 0728 (thinking)
    Text · 1M context
  • Qwen: Qwen3 Coder Flash
    Text · 1M context
  • Qwen: Qwen3 Coder Plus
    Text · 1M context
  • Qwen: Qwen3 Max
    Text · 262K context
  • Qwen: Qwen3 Max Thinking
    Text · 262K context
  • Qwen: Qwen3.5 Plus 2026-02-15
    Text, Vision, Video · 1M context
  • Qwen: Qwen3.5 Plus 2026-04-20
    Text, Vision, Video · 1M context
  • Qwen: Qwen3.5-Flash
    Text, Vision, Video · 1M context
  • Qwen: Qwen3.6 Flash
    Text, Vision, Video · 1M context
  • Qwen: Qwen3.6 Max Preview
    Text · 262K context
  • Qwen: Qwen3.6 Plus
    Text, Vision, Video · 1M context
Open Weights
  • Qwen: Qwen3 14B
    Text · 41K context
  • Qwen: Qwen3 235B A22B
    Text · 131K context
  • Qwen: Qwen3 235B A22B Instruct 2507
    Text · 262K context
  • Qwen: Qwen3 235B A22B Thinking 2507
    Text · 131K context
  • Qwen: Qwen3 30B A3B
    Text · 41K context
  • Qwen: Qwen3 30B A3B Instruct 2507
    Text · 262K context
  • Qwen: Qwen3 30B A3B Thinking 2507
    Text · 131K context
  • Qwen: Qwen3 32B
    Text · 41K context
  • Qwen: Qwen3 8B
    Text · 41K context
  • Qwen: Qwen3 Coder 30B A3B Instruct
    Text · 160K context
  • Qwen: Qwen3 Coder 480B A35B
    Text · 262K context
  • Qwen: Qwen3 Coder Next
    Text · 262K context
  • Qwen: Qwen3 Next 80B A3B Instruct
    Text · 262K context
  • Qwen: Qwen3 Next 80B A3B Thinking
    Text · 131K context
  • Qwen: Qwen3 VL 235B A22B Instruct
    Text, Vision · 262K context
  • Qwen: Qwen3 VL 235B A22B Thinking
    Text, Vision · 131K context
  • Qwen: Qwen3 VL 30B A3B Instruct
    Text, Vision · 131K context
  • Qwen: Qwen3 VL 30B A3B Thinking
    Text, Vision · 131K context
  • Qwen: Qwen3 VL 32B Instruct
    Text, Vision · 131K context
  • Qwen: Qwen3 VL 8B Instruct
    Text, Vision · 131K context
  • Qwen: Qwen3 VL 8B Thinking
    Text, Vision · 131K context
  • Qwen: Qwen3.5 397B A17B
    Text, Vision, Video · 262K context
  • Qwen: Qwen3.5-122B-A10B
    Text, Vision, Video · 262K context
  • Qwen: Qwen3.5-27B
    Text, Vision, Video · 262K context
  • Qwen: Qwen3.5-35B-A3B
    Text, Vision, Video · 262K context
  • Qwen: Qwen3.5-9B
    Text, Vision, Video · 262K context
  • Qwen: Qwen3.6 27B
    Text, Vision, Video · 256K context
  • Qwen: Qwen3.6 35B A3B
    Text, Vision, Video · 262K context
ByteDance (CN)
Closed Weights
  • Seed: Seed 1.6
    Text, Vision, Video · 262K context
  • Seed: Seed 1.6 Flash
    Text, Vision, Video · 262K context
  • Seed: Seed-2.0-Lite
    Text, Vision, Video · 262K context
  • Seed: Seed-2.0-Mini
    Text, Vision, Video · 262K context
Open Weights

None

DeepSeek (CN)
Closed Weights

None

Open Weights
  • DeepSeek: DeepSeek V3.1
    Text · 33K context
  • DeepSeek: DeepSeek V3.1 Terminus
    Text · 164K context
  • DeepSeek: DeepSeek V3.2
    Text · 131K context
  • DeepSeek: DeepSeek V3.2 Exp
    Text · 164K context
  • DeepSeek: DeepSeek V3.2 Speciale
    Text · 164K context
  • DeepSeek: DeepSeek V4 Flash
    Text · 1M context
  • DeepSeek: DeepSeek V4 Pro
    Text · 1M context
  • DeepSeek: R1 0528
    Text · 164K context
MiniMax (CN)
Closed Weights
  • MiniMax: MiniMax M1
    Text · 1M context
  • MiniMax: MiniMax M2-her
    Text · 66K context
Open Weights
  • MiniMax: MiniMax M2
    Text · 197K context
  • MiniMax: MiniMax M2.1
    Text · 197K context
  • MiniMax: MiniMax M2.5
    Text · 197K context
  • MiniMax: MiniMax M2.7
    Text · 197K context