The gap between what AI systems can do and what we've convinced ourselves they can do is widening faster than the systems themselves improve. Premier League betting defeats models from Google, OpenAI, Anthropic, and xAI alike, yet the same week we're debating whether AI will flatten corporate hierarchies and reshape semiconductor demand. Sam Altman is fielding questions about trustworthiness while the infrastructure for verifying truth online crumbles under the weight of synthetic content.

The real pressure isn't on AI to get better at soccer predictions or corporate strategy; it's on the institutions that are supposed to stand between these systems and consequential decisions. When the FBI can exploit push notifications, when satellite data restrictions blur verification, when chatbots train people to outsource moral friction instead of working through it, the question isn't whether AI models are capable. It's whether we've built any meaningful resistance to using them anyway, and whether the people profiting from deployment have any incentive to slow down while we figure it out.
Sloane Duvall