READINESS SCAN
No. 2 · Week of February 23, 2026
Good morning. Chris here. Anthropic published research this week that should change how every leader in your organization thinks about AI adoption. We lead with that, then the news.
Anthropic's new AI Fluency Index analyzed 9,830 conversations with Claude, tracking 11 behaviors that constitute AI fluency. The central finding is a paradox. People who treat the model as a thought partner, not a vending machine, get dramatically better at critical thinking: they are 5.6 times more likely to question the AI's reasoning and 4 times more likely to catch missing context. But when the same tool produces a polished artifact, critical discernment collapses. Users become 5.2 points less likely to spot gaps and 3.7 points less likely to check facts. The work looks finished, so the human stops thinking.
As the machine's ability to generate flawless output increases, human critical evaluation becomes our most vital and most threatened skill. Anthropic just quantified the gap between using AI and being ready for it.
The robots need handlers, and the companies aren't saying so. MIT Technology Review investigated humanoid robot companies and found systematic concealment of the human operators running the machines. The demos look autonomous. They require constant human intervention. This is the artifact paradox made physical: the more impressive the output looks, the less anyone questions what's actually producing it.
Writing code is cheap now. That's the problem. Developer Simon Willison published a guide arguing that code generation has become nearly free. Within days, Forrester analyst Sandy Carielli warned that Claude's expanding capability is triggering a "SaaS-pocalypse" in cybersecurity. More code generated faster means more attack surface generated faster. Every CTO celebrating lower development costs should be asking what their security team thinks about the volume.
OpenAI brought in the consultants. OpenAI hired McKinsey to help land its enterprise push. The most valuable startup on earth built a model that can write strategy decks, then hired a firm that writes strategy decks to help sell it. Meanwhile, Newark public schools started teaching AI literacy to students. A public school system in New Jersey is moving faster on readiness than most Fortune 500 companies.
The backlash isn't what you think it is. The New York Times reported that Americans aren't excited about AI the way they were about the dot-com boom. But the more interesting data came from MetLife's workforce study, via HR Dive: employees who cling to their existing roles rather than adapting are producing measurably worse outcomes than those who adapt. The resistance isn't protecting anyone. It's accelerating the displacement it's trying to prevent.
The signal to watch. Anthropic is funding a Super PAC for AI regulation. The company that just published research showing people can't evaluate its own tool's output is now spending money to regulate it. When the builder funds the guardrails, the timeline is shorter than the market is pricing in.
“The human work behind humanoid robots is being hidden.”
MIT Technology Review, February 23, 2026
Everyone sees the same week differently. Three questions. Your read.