The real competition in AI isn't between models or platforms anymore; it's over who gets to set the rules for what these systems can and cannot do. Bluesky is building feed customization on top of AI. Anthropic is doubling down on consumer subscriptions while maintaining boundaries around its models, even when the Pentagon wants access. And Stanford researchers are measuring the concrete harms of chatbot sycophancy precisely because the industry has no agreed-upon standard for acceptable behavior.

Meanwhile, the infrastructure is spreading faster than the governance: AI-driven border surveillance is rolling out across West Africa with minimal oversight, recruiters are actively working around AI hiring systems because they've lost confidence in the outputs, and Musk's xAI has hemorrhaged nearly all of its co-founders, suggesting that even within a single company there's no consensus on what the technology should be used for or how it should operate.

The pattern underneath is that capital and technical capability have outpaced institutional agreement on control. Anthropic can afford to tell the Pentagon no because it has consumer revenue. Bluesky can layer AI onto an open protocol because there's no single gatekeeper. But that same fragmentation means a recruiter in one country and a border official in another face the same AI system with zero coordination on what it should optimize for: profit, security, fairness, or speed.

The question isn't whether AI is good or bad. It's whether any company, government, or standard-setting body can actually enforce a coherent answer.
Sloane Duvall