The gap between what AI companies say their products are for and what they charge users to do with them is widening. Microsoft's terms declare that Copilot exists "for entertainment purposes only," yet millions treat it as a work assistant. Anthropic just revoked Claude subscribers' access to OpenClaw, a widely used open-source agent framework, forcing them to buy credits instead, a move that drew immediate pushback on both cost and principle. The pattern is familiar: companies launch products with permissive positioning, build user dependency, then monetize the friction.

Meanwhile, actual AI deployment is concentrating where liability is lowest and labor alternatives are nonexistent. Japan is moving physical robots from pilots into real work because there is no other choice. South Korea is putting ChatGPT into dolls for elderly care because the social care system is broken.

The regulatory conversation, about whether auditors are ready for AI or whether data centers belong in your backyard, assumes AI adoption is a policy question. It isn't. It's a labor shortage question. A liability question. A power question. The companies writing disclaimers in small text know exactly what their products will be used for, and they're pricing accordingly.
Sloane Duvall