The infrastructure that powers AI is consolidating faster than the models themselves, and the winners are no longer the labs publishing papers but the companies controlling compute, data centers, and distribution channels. Anthropic's deal with SpaceX for xAI computing resources, Samsung crossing a $1 trillion valuation on chip demand, TSMC backing renewables to handle the energy crunch, and Arm projecting $2 billion in AI chip sales starting next year all point to the same shift: money and leverage are flowing to whoever can supply the electricity, silicon, and bandwidth that models require. Meanwhile, the AI companies themselves are becoming tenants in someone else's infrastructure play. xAI's real business may be building data centers more than training models. SpaceX is proposing a $119 billion chip factory in Texas. Anthropic is renting capacity from xAI. This is not competition between labs; it is the emergence of a utility layer that will outlast any particular model architecture.
The second pattern is regulatory capture dressed up as safety. Trump's sudden endorsement of AI safety testing after the Mythos incident, combined with the Center for AI Standards and Innovation signing pre-deployment evaluation agreements with Google DeepMind, Microsoft, and xAI, creates a system in which incumbents get to vet their competitors before launch. The government gets a seat at the table and calls it oversight; the big labs get a filter that slows down anyone trying to ship faster. DeepSeek's $45 billion first-round valuation, built on models trained at a fraction of US compute costs, is the threat that makes this regulatory theater necessary. When a Chinese lab can outflank you on efficiency, suddenly you want a government agency reviewing what gets released.

The Anthropic-SpaceX deal also reveals something simpler: the founders and investors who shaped OpenAI's early direction never stopped fighting over it. Greg Brockman's disclosure that his stake is worth $30 billion, Shivon Zilis's testimony about Musk trying to recruit Altman to Tesla in 2017, and the public history of the lawsuit and the restructuring all show that the real story is not about AI safety or alignment. It is about who owns the upside when the technology works.
The third signal is that labor is being automated selectively, not universally. Match Group is slowing hiring to pay for AI tools. Zillow is using AI for productivity gains on its platform. Apple paid $250 million to settle a lawsuit over overpromising Siri's AI features. But the MIT study showing that US companies target automation at employees earning wage premiums, not at low-wage workers, reveals the actual pattern: AI is being used to suppress wages for workers with leverage, not to replace workers outright.

The collapse of the Snap-Perplexity deal also matters here. A $400 million integration that was supposed to lock Perplexity into Snapchat's distribution ended amicably, which is code for: Perplexity realized it did not need Snap's users enough to give up equity or control. Distribution is fragmenting. Google is adding Reddit quotes to search. Ethos is onboarding 35,000 experts per week through voice. The assumption that one platform or model would own the entire value chain is breaking down. The winners will be whoever controls the compute and whoever controls the users; everyone else is renting.
Sloane Duvall