Today's signal

Andrej Karpathy posted a thread on April 10 arguing that there is a growing and largely invisible gap in how people understand AI capability. Most people formed their views of AI on free, outdated models. The people who use frontier agentic tools like OpenAI Codex and Claude Code professionally are experiencing something else entirely: models that can restructure an entire codebase in an hour. These two groups are speaking past each other.

Why it matters

The reason coding has taken the biggest hit from AI is structural, not accidental. Code has verifiable reward functions: a unit test either passes or it doesn't. That makes it amenable to reinforcement learning in a way that writing simply isn't. There is no clean signal for whether a paragraph is good. Karpathy also pointed out that coding delivers more B2B value, so the largest share of every AI lab's effort goes into improving it. Writing was never the low-hanging fruit. It just looked that way because everyone could see the output. What they couldn't see was how hard it is to train a model to be a good storyteller.
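To make the "verifiable reward" point concrete, here is a minimal sketch of what a binary reward signal for code looks like, assuming a toy setup where a model's candidate solution defines a `solve` function and is graded against unit-test-style cases. The names and structure are illustrative, not drawn from Karpathy's thread or any lab's actual training pipeline.

```python
# Toy illustration of a verifiable reward for code (hypothetical setup):
# run the candidate solution against test cases and return a binary score
# that an RL loop can optimize. "solve" is an assumed entry-point name.

def reward_for_code(candidate_source: str, tests: list[tuple[tuple, object]]) -> float:
    """Return 1.0 if the candidate passes every test case, else 0.0."""
    namespace: dict = {}
    try:
        exec(candidate_source, namespace)   # define the candidate function
        solve = namespace["solve"]          # assumed entry point
        for args, expected in tests:
            if solve(*args) != expected:
                return 0.0
    except Exception:
        return 0.0                          # crashes count as failure
    return 1.0


# Example: a model-generated snippet and its test cases.
candidate = "def solve(a, b):\n    return a + b\n"
tests = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
print(reward_for_code(candidate, tests))    # 1.0 -- a clean, machine-checkable signal

# There is no equivalent oracle for prose: nothing maps a paragraph to
# 1.0 "good" or 0.0 "bad", which is the asymmetry described above.
```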

The take

The popular narrative had it backwards. Writing was supposed to fall first because anyone could judge it. But "anyone can judge it" is exactly why AI can't easily optimize it: subjective human opinions don't compress into a training signal the way a passing test suite does. Coding fell first because machines can judge it. The irony is sharp: the less an output needs human judgment to evaluate, the easier it is for AI to master. Storytelling isn't surviving because it's sacred. It's surviving because it's unmeasurable.

The number

2 groups. That's Karpathy's framework: those who formed their AI opinions on free, outdated models, and those using frontier agentic tools professionally. The gap between what these two groups believe AI can do is now larger than the gap between what AI could do in 2023 versus today.

Want the full breakdown of why coding and writing are diverging, and what it means for people who work in both? Read the analysis on Analytics Drift.

Keep reading