OpenAI is doubling down on infrastructure and behavioral control. The Stargate announcement signals a capital-intensive bet on compute capacity as the binding constraint on AGI development, while the goblin analysis reveals that the company is now forensically examining why its own models produce unwanted outputs, a pivot from dismissing such quirks as harmless to treating them as a systems problem worth fixing. The cybersecurity framework announcement wraps this work in policy language, but the real signal is OpenAI's recognition that as models scale, their failure modes scale too.

Meanwhile, the rest of the field is fragmenting along different pressures. Hugging Face is flagging evaluation as the new computational bottleneck, suggesting the open-source ecosystem sees a different constraint than OpenAI does; IBM and MIT are betting on quantum-AI convergence as a long-term hedge; Anthropic is benchmarking Claude's specialized capabilities in narrow domains like bioinformatics; Mistral is shipping agent infrastructure; and NVIDIA, sitting at the center of all compute demand, is simply hosting a financial call, because no announcement is needed when every player's capital plan flows through its data centers. The pattern is clear: whoever controls the compute controls the pace of capability development, and everyone is positioning accordingly.
Sloane Duvall
A curated reference of models from major AI labs, with open/closed weight status, input modalities, and context window size. American labs tend toward closed-weight models, while Chinese labs tend toward open-weight models.
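Read as data, the reference amounts to a small schema: lab, model, weight status, modalities, context window. Below is a minimal Python sketch of that structure; the ModelEntry class, its field names, and the example rows are illustrative assumptions, not the original table's entries, so verify any specific figures against each lab's documentation.

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    lab: str               # organization that trained the model
    name: str              # model identifier
    open_weights: bool     # True if weights are publicly downloadable
    modalities: list[str]  # accepted input modalities
    context_window: int    # maximum context length, in tokens

# Illustrative rows only; not the original table's contents.
MODELS = [
    ModelEntry("OpenAI", "GPT-4o", False, ["text", "image", "audio"], 128_000),
    ModelEntry("Anthropic", "Claude 3.5 Sonnet", False, ["text", "image"], 200_000),
    ModelEntry("Meta", "Llama 3.1 405B", True, ["text"], 128_000),
    ModelEntry("DeepSeek", "DeepSeek-V3", True, ["text"], 128_000),
    ModelEntry("Alibaba", "Qwen2.5-72B", True, ["text"], 128_000),
]

# The open/closed split noted above falls out directly from the data:
open_models = [m.name for m in MODELS if m.open_weights]
```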