AI Is Impressive. It Also Doesn't Know When Thanksgiving Is.

Last year I was running a project through a Claude Project (similar to an "agent"): sequential steps, clear dependencies, the kind of workflow AI handles well. At one point, Claude flagged a critical action item: send an email to get buy-in from my manager the following Thursday.

The following Thursday was Thanksgiving.

I caught it immediately. Any human would. But Claude had no idea. It understood the process perfectly and had zero awareness of the world that process lives in. I had to coach it through something a first-week employee would have figured out on their own.

That moment stuck with me. Not because it was a failure, but because it was clarifying.

The Honest Version of AI Adoption

There's a version of AI content that gets a lot of attention right now. It's written by people who can't quantify their work. Big claims, vague outcomes, zero specifics. They're not wrong that AI is powerful. They're just not telling you the whole story.

The whole story is this: AI is simultaneously one of the most capable tools I've ever used and one of the most contextually blind.

Both things are true. You don't get to pick one.

The hype machine focuses entirely on the capability side, which is genuinely impressive. What it skips is the amount of human judgment required to keep AI operating in the real world. Not just at launch. Continuously.

The Contract Example

Just this week I was working through a vendor onboarding. Standard legal dance: they sent the contract, our legal made redlines, their legal accepted some and added comments back.

Claude assessed the situation and concluded we were done. Ready to move forward.

We were not done. The next step was to request a clean version of the contract, one without Word comments, because you cannot sign a document that still has tracked changes and legal commentary embedded in it. That's not a process gap. That's how contracts work. Every lawyer, every operations person, anyone who's been through a vendor agreement knows this without being told.

Claude didn't. I had to explain it.

And I did. And then it was fine. That's the part that matters.

What This Actually Means

The Thanksgiving email and the contract redlines aren't bugs to be fixed in the next model release. They're a category of limitation that isn't going away anytime soon: AI understands process but not context.

It can follow steps. It cannot independently understand the cultural calendar, the unwritten rules of your organization, the relationship dynamics between two parties in a negotiation, or the fact that nobody signs a contract with comments still in it.

That gap is where you live. That's the job.

This is why I keep coming back to the 10-80-10 model. Not as a philosophical framework, but as a practical operating approach. The first and last 10% are human work, and the final 10% in particular is non-negotiable. You are the quality layer. You are the context layer. You are the person who knows it's Thanksgiving.

AI doesn't fail because the technology is bad. It fails because people deploy it without accounting for everything the technology doesn't know.

What Prompting Guides Won't Teach You

Most AI content focuses on prompting: how to write better inputs to get better outputs. That's useful, but it's not the ceiling.

The actual ceiling is knowing when to intervene. Recognizing the moment AI has confidently done something technically correct and practically wrong. Catching it before it becomes someone else's problem. Coaching the model the way you'd coach a sharp new hire who doesn't yet know how your industry works.

That skill is built from experience. You can't shortcut it with better prompts. You develop it by deploying AI in real environments, watching it operate, and paying attention to where the edges are.

A year ago, some of those moments would have slipped past me. They don't anymore, and the Thanksgiving catch is proof of that, not a footnote to it.

That's the honest timeline. That's the part that doesn't make it into the highlight reels.

Cut Through the Noise

If you're evaluating AI for your organization, or evaluating whether the person pitching AI to you knows what they're talking about, ask one question:

Can they tell you specifically where AI has failed them, and what they did about it?

Anyone who's actually deployed AI in a production environment has a list. If they don't, they haven't deployed anything. They've experimented. They've prompted. They haven't shipped.

AI is worth the investment. The outcomes are real. I've seen 40-hour workflows shrink to 2, research that took 16 hours get done in 15 minutes, and reporting that used to consume two full workdays now running in the background while we focus on the decisions that actually matter.

But none of that happened without the human layer catching what the AI couldn't see.

Both things are true. Build accordingly.
