Today's signal

Stanford HAI released its 2026 AI Index today — 423 pages, nine years of data, no marketing spin. The headline: AI capabilities are hitting records across every benchmark, adoption is outpacing the internet and the PC, and the most powerful models are now the least transparent ones on the planet.

Why it matters

This is the report that cuts through hype in both directions. On performance: SWE-bench Verified coding scores rose from 60% to near 100% of the human baseline in a single year. On adoption: GenAI reached 53% of the global population within three years, faster than the PC or the internet. But buried in the same report is a number the industry would rather you skip: the Foundation Model Transparency Index fell from 58 last year to 40, with the most capable models disclosing the least. More power, less accountability. That's the trade the labs are making.

The take

Stanford's data reveals a deliberate trade-off: as models get more powerful, the labs are making them less legible. The Foundation Model Transparency Index dropped from 58 to 40 in a single year. Google, Anthropic, and OpenAI all stopped disclosing training data sizes. Eighty of the 95 most notable 2025 models shipped with no training code. The industry is asking the world to trust AI more while sharing less about how it actually works. That's not a technical constraint. That's a choice.

The number

89%. That's how far the flow of AI researchers and developers moving to the US has dropped since 2017, including an 80% decline in the last year alone. America outspends every other country on AI by a factor of 23, yet it is simultaneously becoming a place the world's best AI talent no longer wants to move to. That's not a talent pipeline. That's a talent cliff.

Want the full breakdown — transparency collapse, the US-China gap, and what $285 billion in private investment actually bought? Read the extended analysis on Analytics Drift.

Keep reading