February 2026
· · ·

I spent last week writing about what AI does to writers.

On Friday around 2 AM, I shut it down. Opus 4.5 had helped me rework it, tighten the language, cut paragraphs I hadn't seen were dead. The sentences looked clean.

Over coffee the next morning, I read it with my wife. The look on her face said everything: “I don’t know who wrote this, but it’s definitely not you.” Three scenes, none of them connected. Every paragraph sounding right. None of them sounding like me. Five hours of work that read like twenty minutes of original thought. She surfaced the problem in one read. The author was gone.

Like all LLMs, Opus 4.5 can be a trap. It’s good enough to sound right when it’s wrong. Each pass, it told me the draft was improving. The structure is tighter. The voice is landing. At midnight, when you’re looking for permission to stop, those words feel like the truth.

I write, in part, for a living. I wrote Perspective Agents during lockdown. The person who wrote that book before ChatGPT and the person who writes now aren't the same. AI is supposed to make the process faster. I'm not sure that's true unless you want to speak machine. AI shifts the work from the page to the person. Where I used to wrestle with structure and language, I now wrestle with something harder: authorship.

· · ·

I set out to warn people about losing themselves to AI. Then I lost myself writing the essay. The machine filtered what I noticed until my perspective defaulted to the machine’s.

It takes years to develop a voice. Failing in a specific way, over and over, until the failures become a style. AI has no scars. It delivers clean, competent work that could be anyone's. The trap mode is autopilot: let the machine produce, accept what comes back. The extraordinary potential lives in another mode: perspective agent. Use the machine to challenge and expand your thinking, not replace it.

In draft #4 I fell into the trap. I didn't know until coffee the next morning. Authorship is an act of the intellect. What my wife detected was not the absence of my voice. It was the absence of me thinking.

If you work with AI with any depth, you learn delusion can feel like progress. AI models generate output close enough to yours that you stop noticing where you end and it begins. In parts, it reads better. Flashes of brilliance distort your read of the whole.

There’s research backing what happened here: algorithmic dependence. Sycophantic feedback is the most corrosive pattern. It collapses judgment into delusion. The machine validates your work. Your judgment agrees because it atrophied. The countermeasure: productive struggle. Skip it, and the judgment disappears.

Before I consult an AI model, I write down what I think. Not a polished position. The rough shape of my argument in my own words. When I skip that step, the machine leads. When I do it, the machine serves. Grinding through the edit, I broke the rule.

Make it a habit, and the debt compounds. Nataliya Kosmyna's lab at MIT put what I felt to the test. Short-term speed comes at a longer-term cost; the capacity to think independently erodes. The same AI produced opposite effects depending on how it was used. People who thought first came out sharper, but people who let ChatGPT lead fell below baseline. Below even those who never used it. Kosmyna calls it cognitive debt. The speed costs you the capacity to judge what came back.

After draft #4, I stepped away from the machine and wrote the worst version of what I believed. Clumsy and repetitive, but mine. The energy it took to start over was the point. I brought Claude Code back in, on my terms. It would be easier to write without AI. I stay in because you can only learn what the machine does to you by thinking inside it. My wife read the next draft. The writer she knew was slowly working his way back.

· · ·

When I hit a wall, I look to those who think deeply about the challenge. In 1948, Norbert Wiener founded cybernetics: the study of communication and control between humans and machines. The Macy Conferences had asked the question I was now living: when humans and machines form a system, who controls whom?

Wiener drew the word from the Greek kybernan: to steer. He spent the rest of his life, most pointedly in The Human Use of Human Beings, warning that most people wouldn't.

The person orchestrating AI systems corrects continuously. Your intent in, the machine's output back, your judgment on what returned, then correction. Skip the judgment step and the machine is steering. This essay is evidence of how much easier that is said than done.

You can’t see a medium’s effects from inside it. McLuhan called it the anti-environment: something outside the system that makes the system visible. My wife was the anti-environment that Friday morning. She saw what Opus 4.5 and I couldn’t, because she wasn’t in the loop. Work that carries your name needs a human read from outside the process. Someone who can tell you whether the person on the page is still you.

Wiener understood machines. What kept him up was humans. He predicted that without intervention, we would be reduced to giving machines instructions. When asked whether machines could take total control, he said yes. But only through laziness or cowardice: our refusing to understand the machines ourselves.

What I've described is a new grammar. It goes beyond anything in corporate trainings or college courses on AI. A literacy of selfhood inside machine systems. The ability to know when you're still steering. Writing is where I feel it. But everyone working inside AI systems faces the same problem: knowing whether the thinking is still theirs. One person can learn to steer. The question is what happens when the whole organization needs to.

It’s late Sunday night. I’ve been inside Claude Code all week, and I think it’s finally the piece I wanted to write. The test is in the morning. We’ll read it again over coffee. She won’t need to say the word. I’ll stay on it.

What we’re working on isn’t an essay. It’s a mirror.

· · ·

Part 2 of "Three Essays on AI and People" · Read all three →

Next: Human OS for AI — The overlooked bottleneck in AI isn't technology. It's the people who have to use it.


Where does your organization stand? Take the Human Readiness Assessment →



Chris Perry · Andus Labs · 339 sources · 300+ diagnostics · 30 years in the field