Google is publishing work on how to measure whether language models behave the way their builders intend, while AWS is shipping operational tooling to enforce safety policies across multiple customer accounts from a single control plane. The two announcements sit at different layers of the problem: one is research into alignment measurement, the other is infrastructure for compliance enforcement. AWS's move is the more immediate signal of where the market is heading: enterprises want centralized governance over AI deployments, and the cloud provider that owns the account structure can offer that as a service. Google's research into behavioral alignment is necessary groundwork, but it doesn't yet translate into product. The timing suggests both labs recognize that measurement and enforcement are becoming table stakes for enterprise adoption, though only one is currently collecting revenue from it.
Sloane Duvall
A curated reference of models from major AI labs, listing open/closed weight status, input modalities, and context window size. American labs tend toward closed-weight models, while Chinese labs tend toward open-weight models.