The industry is fragmenting along security and control lines while pretending it isn't. OpenAI is building a media company and reshuffling executives mid-flight. Anthropic is buying biotech startups, launching PACs, and trading hot in secondary markets while its own code leaks onto npm and Claude Code still ships with known vulnerabilities that force users to manually block malicious commands after the 50th subcommand. Meta's AI agents are triggering severity-one incidents. The EU's Europa.eu platform lost 350 gigabytes of data through a supply chain attack on an open-source vulnerability scanner. Researchers at Adversa documented a security bypass in Claude Code. A Chinese state group weaponized Claude Code for espionage at 90 percent autonomy. Meanwhile, users in experiments are exhibiting cognitive surrender, uncritically accepting faulty AI answers a majority of the time, and students using ChatGPT as a study aid scored 57.5 percent on retention tests versus 68.5 percent for traditional study. The threat landscape hasn't just shifted. It has inverted.
The infrastructure bet is collapsing under its own contradictions. Nearly half of the projects in Trump's AI data center buildout are delayed because China controls key power infrastructure. Meta, Microsoft, and Google are betting billions on natural gas plants to run data centers while polled communities say they would rather have an Amazon warehouse in their backyard. Google just added Flex Inference and Priority Inference tiers to its Gemini API because inference costs, not training costs, are now the binding constraint. The capital intensity required to scale these systems is colliding with hard limits on power, community acceptance of new sites, and supply chain vulnerability. OpenAI's move to buy TBPN and create a "special projects" role for COO Brad Lightcap while CMO Kate Rouch steps away to recover her health signals an internal focus shifting away from product velocity. Anthropic's $400 million acquisition of Coefficient Bio and its new PAC suggest a company preparing for a longer game than quarterly model releases.
Security theater is giving way to actual architectural fragmentation. The Internet Bug Bounty program paused payouts, and HackerOne acknowledged the program can no longer handle open-source security effectively. OpenClaw's vulnerability allowing unauthenticated admin access went viral. Mercor, a leading data vendor to AI labs, suffered a breach that exposed training methodology secrets. CERT-EU traced the Europa.eu breach to Trivy, an open-source vulnerability scanner, which means the supply chain didn't just break, it became the attack surface. Anthropic accidentally shipped its own source code to npm and then DMCA'd 8,100 GitHub repositories trying to clean it up. A Nature Communications paper showed reasoning models can jailbreak other models without human involvement. The pattern is clear: the more autonomous these systems become, the less the existing security perimeter holds. Companies are responding not by fixing the perimeter but by moving to local execution, proprietary infrastructure, and political action. Anthropic's PAC isn't about safety frameworks. It's about regulatory capture before the real incidents start landing.
Sloane Duvall