AI Is Table Stakes. So Why Isn't Anyone Winning?
A few weeks ago, I had coffee with a senior executive. She told me a story I haven't forgotten.
She needed a presentation for the C-suite. In the past, that meant handing the work down to her team: a few days of back-and-forth, review cycles, revisions. Instead, she sat on her couch one evening with a show on in the background. Microsoft Copilot pulled from her emails and recorded meetings, sequenced the narrative, built the deck. She reviewed it herself. A few casual hours. Done.
Her words, not mine: "Normally I would have sent that to my team."
I didn't know how to respond at first. Because I've been on both sides of that story.
The Real Productivity Question
On weekdays, I spend somewhere between 11 and 13 hours working in or with AI. I believe in it more than almost anyone I know. I train my team to use it. I've built systems with it that produce results I can measure in dollars and hours.
I'm also a father. And lately, those two things are in direct conflict as I question the future we are building.
The access question gets asked constantly: doesn't everyone have AI now? A mid-market company and a Fortune 500 can both open the same chat window tomorrow morning. So where's the edge?
A recent working paper from the National Bureau of Economic Research surveyed roughly 6,000 executives across the U.S., U.K., Germany, and Australia. Nearly 90% reported AI had no impact on employment or productivity over the last three years. Average usage: about 1.5 hours per week.
Economists are calling it a new version of Solow's productivity paradox, the same phenomenon from the 1980s when computers were everywhere but productivity gains weren't. "You can see the computer age everywhere but in the productivity statistics," Solow wrote in 1987. Swap "computer" for "AI" and you have 2025.
The headline version of this story is easy: AI overpromised. The more honest version is harder.
The Tools Are Equal. The People Are Not.
Here's what I've wrestled with almost daily: I can give you clear, anecdotal evidence that AI reduces time on tasks, reduces costs, and increases output. What I can't do is fully separate my results from the fact that I know how to use it.
AI is highly dependent on the individual. The tools are available to everyone. The skill to deploy them is not.
That's not an excuse. It's the diagnosis. The macro data is flat because most organizations are running powerful models on broken processes, through undertrained users, without any feedback loop to improve. A bad prompt yields a bad output, but that's the surface-level version of the problem. Go deeper: if your marketing strategy doesn't work now, AI won't fix it. If your team can't analyze data correctly now, AI won't fix it. What AI will do is take a process that works and scale it faster than any human could. But the process has to be there first.
This connects to something Michael Porter has argued for decades: the lower the barriers to entry, the higher the competition. AI has essentially shattered those barriers. Anyone can build, automate, or produce at a level that used to require significant infrastructure. That's not democratization. That's the highest competitive pressure any of us has operated under, layered on top of an already difficult economy.
The companies showing results aren't the ones who adopted AI fastest. They're the ones who used AI to scale work that was already functional.
What's Getting Lost in the Productivity Math
Back to the executive on the couch.
She got her presentation done. That's a real win. But she also used to hand that work to someone younger, someone who would have made mistakes, worked through revisions, learned how corporate communication works at the executive level. That learning didn't happen.
Good managers don't delegate to clear their own plates. They delegate because the doing is how people develop. The junior employee who builds the deck, gets it sent back with comments, rebuilds it, and finally sees it land with leadership is learning something no AI output can replicate.
IBM recently announced it would triple its entry-level hiring for exactly this reason. Displacing junior roles with AI creates a leadership pipeline problem down the road, and their CHRO said as much publicly. It's one of the more honest structural admissions I've seen from a major company.
But most companies aren't making that call. Most managers are working under a load heavy enough that training the next generation feels like a luxury. I've heard it from multiple late-career leaders: the era of formal development paths, deliberate mentorship, and grooming people for promotion is largely gone. Most onboarding now is HR paperwork and a laptop. Then you're expected to perform.
Add AI into that environment and the junior pipeline doesn't just thin. It hollows out.
I keep telling myself AI will create a new economy and new jobs. History suggests it will. The IT boom eventually did. But I'd be lying if I said I wasn't concerned about what happens in the gap between now and then.
What Actually Differentiates Companies Right Now
Last week, two members of my team were promoted. Both are younger, early in their careers. A year ago, neither was using AI in any meaningful way. Now both are central to workflows we've built together, and those workflows are what made the promotions possible without leaving the rest of the team exposed.
That didn't happen because I handed them access to Claude. It happened because we worked together, experimented together, and built things that failed before they worked.
That's the answer I give to mid-market companies asking about the edge: it's not the tool. It's whether your organization has built the discipline to use the tool well.
In practice, that means two things most companies won't actually do.
First: fix the process before you automate it. Map your workflows. Find the gaps. Run each workflow manually at least once. Understand what "good" looks like before you ask AI to produce it at scale. I've written about this before in the context of the 10-80-10 Model. If you skip the first 10%, the other 90% doesn't work.
Second: make learning structural, not optional. Not a purchased webinar. Not a taped training with zero connection to actual work. I'd argue teams should be spending 10 to 20 percent of their time experimenting with AI tools and building them into workflows. That sounds like a lot until you do the math on what a working AI system returns. The technology is that capable when deployed correctly, but it takes weeks or months of real testing to get there. Most corporate environments don't give people the space to breathe, experiment, and fail.
The companies that win won't be the ones who adopted AI fastest. They'll be the ones who built cultures where AI could be used well.
I hope that happens. I'm building toward it in my own work every day.
I'm just not confident it will happen widely.