The labs are fragmenting into distinct competitive zones, each signaling where it believes defensible advantage lies. OpenAI is hardening account security, a reactive move that speaks to the vulnerability of centralized authentication in a high-value target environment. Google DeepMind is positioning in healthcare, framing AI as a co-clinician rather than a replacement, which hedges regulatory risk while staking territorial claims in a sector where adoption barriers are high and switching costs matter. NVIDIA, by contrast, is playing infrastructure and volume: OpenClaw crossed 100,000 GitHub stars by January, and the company is shipping cloud gaming integrations at scale, treating developer adoption and embedded deployment as the real moat. IBM is betting on physics-based foundation models for automotive design alongside Dallara, narrowing focus to a specific vertical where domain-specific training compounds value. GitHub is documenting CLI modes for Copilot, a move that signals the shift from novelty to operational embedding; making the tool legible to beginners suggests the company is optimizing for organizational penetration rather than expert adoption. Anthropic's note on personal guidance queries is the only announcement that touches on what users actually do with these systems once they own them, hinting at a use-case frontier that most labs are still ignoring. The pattern is clear: the labs that are winning are either controlling infrastructure (NVIDIA), embedding into developer workflows (GitHub), or targeting regulated verticals with high switching costs (Google, IBM). Account security and personal guidance are afterthoughts to that core competition.
Sloane Duvall
A curated reference of models from major AI labs, with open/closed weight status, input modalities, and context window size. American labs tend towards closed weights models and Chinese labs tend toward open weights models.
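A minimal sketch of how such a reference might be structured, using a handful of widely known models as illustrative entries. The entry names, figures, and the `open_weight_share` helper are assumptions for illustration, not part of the curated reference itself; context window sizes are indicative and change between releases.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelEntry:
    name: str
    lab: str
    country: str            # lab's home country, e.g. "US" or "CN"
    open_weights: bool      # True if weights are publicly released
    modalities: tuple       # supported input modalities
    context_window: int     # tokens, indicative figure

# Illustrative entries reflecting publicly announced releases.
MODELS = [
    ModelEntry("GPT-4o", "OpenAI", "US", False, ("text", "image", "audio"), 128_000),
    ModelEntry("Claude 3.5 Sonnet", "Anthropic", "US", False, ("text", "image"), 200_000),
    ModelEntry("Llama 3.1", "Meta", "US", True, ("text",), 128_000),
    ModelEntry("Qwen2.5", "Alibaba", "CN", True, ("text",), 128_000),
    ModelEntry("DeepSeek-V3", "DeepSeek", "CN", True, ("text",), 128_000),
]

def open_weight_share(country: str) -> float:
    """Fraction of listed models from `country` that ship open weights."""
    rows = [m for m in MODELS if m.country == country]
    return sum(m.open_weights for m in rows) / len(rows)
```

On this small sample the helper reflects the tendency noted above: the Chinese entries are all open-weight, while the American entries are mostly closed (Llama being the notable exception).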