The AI Paradoxes No One Is Talking About


The main reason I chose the Bauer College of Business for my MBA was a Google competition.

I kept seeing the University of Houston show up in the annual Google Online Marketing Challenge rankings, award after award. So I applied, got in, and landed on a strong team. We spent three weeks running Google Ads for a non-profit. Not casually. We cared enough to split the days into 6-hour shifts so someone was watching the campaigns around the clock. We tweaked copy, adjusted bid strategy, paused underperformers, and pushed hard on timing.

The result: we placed 3rd in the world out of thousands of teams.

What stuck with me wasn't the award. It was the feeling of pulling a lever, adjusting a variable, and watching conversions respond in real time. Human skill, directly connected to outcome. It was exhilarating.

That was 2015. I didn't know it then, but that experience planted the seed for how I'd eventually think about AI and where it creates a paradox.

Paradox 1: The PMax Problem

A few years ago, I was managing a talented performance marketer, and she walked me into a realization I've been thinking about ever since.

She was explaining Google Performance Max, or PMax, around the time ChatGPT was becoming a household name. PMax is Google's AI-powered campaign type that handles bidding, budget optimization, audience targeting, creative generation, and attribution automatically. Google's own description is direct: "Google AI is used across bidding, budget optimization, audiences, creatives, attribution, and more."

So I asked the obvious question: if AI is doing all of that, what's left for the marketer to do?

We both went quiet for a second.

Here's the paradox: PMax is designed to maximize campaign performance. The problem is that it does this for everyone. When two competitors in the same industry with similar products both run PMax, they're essentially pointing the same AI at each other. The targeting is optimized. The creative is generated. The bidding is automated.

At that point, what separates them?

Budget. That's it.

The skills that used to create real separation (sharp copywriting, precise audience segmentation, bid strategy, creative testing) have been absorbed into the platform. PMax wasn't designed to give you an edge. It was designed to extract more spend efficiently. The better AI gets at running ads, the more every advertiser converges toward the same output.

Compare that to 2015, when my team was manually pulling levers at 2am to edge past a competitor. The skill differential was real and it showed up in results.

That differential is mostly gone now. Which leads directly to the next paradox.

Paradox 2: The Content Treadmill

This one I see constantly.

A marketer asks AI to "write a blog post about X." The AI produces something coherent, well-structured, and grammatically clean. They publish it. They do it again next week. And the week after that.

Here's what's actually happening underneath that workflow.

AI generates content by finding patterns from content that already exists. When you ask it to write about a topic, it produces a statistical remix of what's already been written, a degraded copy of the authoritative sources it learned from. The more AI-generated content floods the web, the more future AI outputs are trained on AI-generated content, which is itself derivative.
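That feedback loop can be made concrete with a toy simulation. This is a deliberately simplified sketch, not how any real model is trained: each "generation" of content is produced only by resampling (with replacement) from the previous generation's output, and we track how many distinct original sources survive.

```python
import random

random.seed(42)

def next_generation(corpus, size):
    # Each new "model" learns only from samples drawn from the
    # previous generation's output (sampling with replacement).
    return [random.choice(corpus) for _ in range(size)]

# Generation 0: a "web" of 1000 distinct source documents.
corpus = list(range(1000))
diversity = [len(set(corpus))]

for _ in range(10):
    corpus = next_generation(corpus, 1000)
    diversity.append(len(set(corpus)))

print(diversity)  # the pool of distinct sources only ever shrinks
```

Because every generation can only recombine what the previous one produced, the count of distinct sources is mathematically non-increasing: novelty is lost and never recovered. That's the treadmill in miniature.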

The paradox: the faster you produce AI content, the less valuable each piece becomes.

This isn't an argument against using AI for content. It's an argument against outsourcing your thinking to it. Google's guidance is straightforward: it rewards original, helpful content that demonstrates real experience and perspective. AI that scrapes and recombines existing content cannot produce that by definition. The novel insight isn't in the training data.

What actually works is using AI for the mechanical 80% (structure, grammar, research, formatting) while you inject the part that only you have: your lived experience, your data, your opinion, your framework. That's what earns authority. That's what ranks.

I've written about this in the context of the 10-80-10 model: human input, AI execution, human refinement. The content version of this isn't optional. If you skip the human input layer, you're not producing content. You're producing noise at scale.

Paradox 3: AI Needs Human Failure

AI is trained on what already works. The breakthroughs come from what doesn't.

That's the tension, and it's one most people skip past.

AI is extraordinary at synthesizing existing knowledge. PhD-level research, complex math, pattern recognition across massive datasets. The capabilities are real. But all of that is built on analyzing what already exists. It has no access to the insights that haven't happened yet.

Those insights tend to come from failure.

Early in my career, before any of today's AI tools existed, I was doing branding work and designing logos in Adobe Illustrator. Steep learning curve. There were sessions where I'd be deep into a custom logo, hit a wall, and just start clicking out of frustration, no plan, no structure, just rage-designing my way through the canvas. Then I'd stop. And occasionally, what was on the screen actually looked good. Better than what I'd been carefully planning.

The same thing happened when I was learning to code websites. A missing semicolon or unclosed tag would break everything, and hunting it down taught me to read code differently than any tutorial would have. Or learning Google Analytics 15 years ago by clicking every link in the backend without a specific goal, not really succeeding at anything, but building a mental map of the system that stayed with me for years.

None of that knowledge made it into a manual. It couldn't. It was built through failure, and it eventually became the foundation for how I structured workflows and agents.

That's what AI can't replicate. AI is trained on best practices. Best practices are what everyone already knows. The breakthroughs happen at the edges where best practices fail, and humans are the ones who live at those edges.

Watch a child learn to walk. They fall constantly, and through falling they develop balance in a way no instruction could produce. The failure is the curriculum.

Business works the same way. You see a problem differently after you've failed at it. Then you can feed those hard-won lessons into AI, and that's where real leverage starts. AI executes known patterns at speed and scale. Humans generate new patterns, usually by getting something wrong first. You need both.

AI needs humans as much as we need AI. Just not in the ways most people frame it.

What to Do With This

None of these paradoxes are arguments against using AI. I'm the last person making that case. I've built my career on deploying AI systems that produce real results: faster reports, better research, more consistent output.

But I've also watched people sprint toward AI tools without understanding what the tools actually do. They automate processes that were already broken. They generate content that competes against better content. They run PMax and wonder why their competitors are keeping pace.

The people who win aren't the ones who use AI the most. They're the ones who understand where human input is irreplaceable, and they protect that space deliberately.

Use AI for the 80% it does well. Own the 10% at the front (context, data, expertise) and the 10% at the back (QA, judgment, refinement). Know what you're handing off and what you're keeping.

The skill isn't prompting. It's knowing when to prompt and when to think.
