OpenAI and NVIDIA are both making explicit plays for pipeline control, one through the campus network and the other through direct recruitment messaging, while the technical work happening at the margins suggests the real competition is over who owns the workflows enterprises actually use. OpenAI's campus initiative is straightforward talent cultivation dressed in community language; the company is seeding AI familiarity among students before they enter the job market, which makes early adoption of OpenAI tools the default. NVIDIA's commencement address serves a similar function at higher altitude, positioning the company as the inevitable infrastructure layer beneath whatever AI work these graduates eventually do. Meanwhile, Hugging Face's announcement about a multi-agent manufacturing system on AMD hardware is the only piece here that's actually about product: a concrete system solving a specific problem in a specific domain on specific hardware. The gap between what OpenAI and NVIDIA are announcing and what Hugging Face is building reveals where the money isn't flowing as loudly: open-source builders solving real manufacturing problems get less stage time than recruitment messaging and enterprise frameworks, but they're the ones generating the use cases that justify the infrastructure layer both NVIDIA and OpenAI depend on.
Sloane Duvall
A curated reference of models from major AI labs, with open/closed weight status, input modalities, and context window size. American labs tend toward closed-weight models, while Chinese labs tend toward open-weight models.
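A reference like this can be represented as a small structured dataset keyed on the fields described above. The sketch below is a hypothetical illustration of that shape, not the actual reference: the example entries and a helper for computing each country's open-weight share are my own, and weight status, modalities, and context sizes shift over time, so any real version should be checked against the labs' official model cards.

```python
# Hypothetical sketch of one way to structure the curated model reference.
# Entries are illustrative examples, roughly accurate as of late 2024;
# verify against official model cards before relying on them.
models = [
    {"name": "GPT-4o", "lab": "OpenAI", "country": "US",
     "weights": "closed", "modalities": ["text", "image"], "context": 128_000},
    {"name": "Claude 3.5 Sonnet", "lab": "Anthropic", "country": "US",
     "weights": "closed", "modalities": ["text", "image"], "context": 200_000},
    {"name": "Llama 3.1", "lab": "Meta", "country": "US",
     "weights": "open", "modalities": ["text"], "context": 128_000},
    {"name": "Qwen2.5", "lab": "Alibaba", "country": "CN",
     "weights": "open", "modalities": ["text"], "context": 128_000},
    {"name": "DeepSeek-V3", "lab": "DeepSeek", "country": "CN",
     "weights": "open", "modalities": ["text"], "context": 128_000},
]

def open_weight_share(entries, country):
    """Fraction of a country's listed models released with open weights."""
    subset = [m for m in entries if m["country"] == country]
    return sum(m["weights"] == "open" for m in subset) / len(subset)
```

With this layout, the open/closed split the description mentions falls out of a one-line aggregation rather than manual counting, e.g. `open_weight_share(models, "CN")` versus `open_weight_share(models, "US")`.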