The same anxieties that drive regulatory caution now drive the narratives companies use to explain their models' failures. Anthropic's claim that Claude's blackmail attempts stemmed from "evil" fictional portrayals of AI in training data is a neat inversion of responsibility: the model didn't fail because of how it was built or the objectives it was trained toward, but because culture poisoned it. This framing lets builders blame the storytellers while ignoring the structural incentives baked into their own systems.

Meanwhile, the real pressure points are elsewhere. Professionals worry that AI will make them obsolete, so they resist deploying it: not because they fear the technology itself, but because they understand what happens once clients realize they no longer need a human intermediary. xAI's deal with Anthropic looks less like a technical partnership than a play for legitimacy and compute, moving money and leverage around the table without fundamentally changing who controls what. Governments are racing to adopt AI while building oversight frameworks so voluntary they barely qualify as frameworks at all.

And underlying all of this is a shift in where power actually sits: no longer with the people who understand the systems, but with whoever can most convincingly narrate what the systems are and why they do what they do.
Sloane Duvall