The Inference Report

March 8, 2026
From the Wire

The collision between AI safety rhetoric and actual business incentives is becoming impossible to hide. Anthropic released its Pro-Human Declaration before its Pentagon deal became public, but the timing of the announcement and the backlash reveal the gap between what companies say they stand for and where they're willing to take capital and contracts. Caitlin Kalinowski's resignation from OpenAI's robotics team over the Department of Defense agreement is the rare moment when someone refuses to rationalize the contradiction, but her departure doesn't change the underlying incentive: defense contracts pay well and fund hardware development that pure consumer products cannot.

Google's restructuring of Sundar Pichai's $692M package to include performance metrics tied to Waymo and Wing signals that autonomous systems and logistics infrastructure are where the real value creation is happening, not language models that generate text or delay adult features for months. Samsung's push to pack multiple AI models onto Galaxy devices and KKR's multibillion-dollar cooling infrastructure play reveal where the actual competition is moving: not toward safety frameworks or declarations, but toward the hardware, power, and data center economics that make AI deployment at scale possible.

When a writing tool like Grammarly can slap the names of famous writers onto a feature without their involvement and call it expert review, it shows how thin the credibility layer has become across the industry. The real story is not what any company says about safety or values, but who controls the infrastructure, who signs the checks, and what happens to the people who refuse to accept that compromise is inevitable.

Sloane Duvall