OpenAI is publishing frameworks and launching bug bounties while simultaneously rolling out customer success stories through its partners, a split signal worth parsing. The Model Spec and Safety Bug Bounty are positioned as governance tools and transparency plays, but they arrive as OpenAI scales deployment through Microsoft's Copilot into enterprise workflows at KPMG and Infobip. Microsoft is not announcing safety infrastructure; it is announcing business impact. Google is fragmenting its AI surface area across music generation, XR prototyping, and embedded Gemini integrations, moving models into specialized contexts rather than consolidating them. NVIDIA's dual message, that AI requires both open and proprietary models and that data centers can be grid-stabilizing infrastructure, reframes compute density as a utility problem rather than a capability race. IBM is bolting third-party voice onto its orchestration layer, watsonx, signaling that the moat is not in the model but in the integration surface. AMD is publishing GPU kernel tutorials, positioning itself as the infrastructure layer that demands education rather than black-box abstraction. The pattern across all ten: nobody is claiming to have solved the hard problem. Instead, each is claiming a different layer of the stack: safety frameworks, customer ROI, model variety, compute efficiency, integration platforms, and hardware literacy. The money is moving toward whoever controls the surface where enterprises actually deploy, not toward whoever publishes the most thorough safety spec.
Sloane Duvall
A curated reference of models from major AI labs, with open/closed weight status, input modalities, and context window size. American labs tend toward closed-weight models, while Chinese labs tend toward open-weight models.
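A minimal sketch of how entries in such a reference might be structured. The field names (lab, model, open_weights, input_modalities, context_window_tokens) and the example rows are illustrative assumptions, not taken from the curated reference itself.

from dataclasses import dataclass

@dataclass
class ModelEntry:
    """One row in a model reference: lab, weight availability, input modalities, context size."""
    lab: str
    model: str
    open_weights: bool           # True if weights are publicly downloadable
    input_modalities: list[str]  # e.g. ["text"] or ["text", "image", "audio"]
    context_window_tokens: int   # advertised context window size

# Illustrative entries only; exact figures should be checked against the reference itself.
REFERENCE = [
    ModelEntry("OpenAI", "GPT-4o", False, ["text", "image", "audio"], 128_000),
    ModelEntry("Meta", "Llama 3.1 405B", True, ["text"], 128_000),
    ModelEntry("DeepSeek", "DeepSeek-V3", True, ["text"], 128_000),
]

# The kind of pattern the reference surfaces: how many entries are open-weight.
open_count = sum(e.open_weights for e in REFERENCE)
print(f"{open_count} of {len(REFERENCE)} entries are open-weight")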