Why Most Businesses Fail to Get Value from AI (and How to Fix It)


Over the past 20 years, I’ve had multiple jobs that tend to spark strong opinions.

In my early twenties, I was a personal trainer. When people found out, they’d want to talk about fitness—which I genuinely enjoyed. Then there were others who insisted it was impossible to get in shape, as if fitness hadn’t been studied to death. In truth, it comes down to the basics: managing calories, exercising, sleeping well, supplementing intelligently (not excessively), and reducing or eliminating alcohol and smoking. (Yes, some people have medical challenges with weight loss. But in seven years of training, I had one verified case—everyone else who said “it’s impossible” simply wasn’t doing what was needed.)

In my mid-twenties, I split time between teaching at the University of Houston and working for a non-profit, Coreluv, that builds and operates orphanages in Haiti. I also taught an experimental college-level course to high school students from an under-resourced neighborhood. These were some of my most fulfilling years. My research focused on education reform, and, like in fitness, I had great conversations—and plenty of people who dismissed my lived experience with a single anecdote. I’d just smile and nod.

While at Coreluv, I became their Director of Marketing, which led to nearly two decades in marketing and digital strategy—running my own consulting agency along the way. Over the years, I’ve done almost everything: live events, web design, paid media, technical SEO, content marketing, analytics, email, influencer programs, branding, and more. And of course, marketing is the profession where everyone outside of marketing thinks they know how to do your job better. The old saying goes, “Marketing is the only department where everyone else’s opinion automatically qualifies as expertise.”

Now I work in applied AI—and the pattern continues. When people hear what I do, they have opinions. After two decades of being told how to do my job—whether as a trainer, teacher, or marketer—I’ve learned to nod and listen.

Why share all this?

Because you’ve probably had the same experience. You have areas of expertise—professional, athletic, or personal—and when people find out, they feel compelled to share their opinions. That shared experience connects us.

So, with that context, let’s talk about why AI isn’t providing value at your business.

The reasons usually fall into three categories:

  1. Security

  2. Data

  3. Search-Engine Thinking

1. Security

Even small businesses need secure AI environments. I’m still surprised at how often I see people input customer identifiers or financial data into personal ChatGPT accounts. You wouldn’t email your company’s P&L from a personal Gmail account—so why would you share sensitive data with a public LLM?

Your business needs an enterprise-level AI platform: Copilot, ChatGPT Enterprise, Claude Team, Gemini for Workspace, or another enterprise-grade option. Yes, enterprise systems still carry some risk, but the goal is mitigation. Personal accounts multiply risk; enterprise platforms contain and control it. It’s that simple.

2. Data

Sometime early this year, I realized that 90%+ of my AI prompts involved real data.

Looking at my five most recent threads in Claude (my preferred platform at Zinus):

  1. Analyzing customer support data.

  2. Reviewing a contract and requesting citations for specific clauses.

  3. Structuring a spreadsheet from a written idea.

  4. Searching public documents for specific sections of data.

  5. Reviewing vendor performance from their latest report.

Every example relied on real data. And that’s just the surface. Many of my larger AI projects—like orchestrator agents with sub-agents—depend on structured internal data and workflows.
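
To make that concrete, here’s a toy sketch of the orchestrator pattern in Python. The sub-agents are hypothetical stand-ins (the names and routing logic are mine, not any specific framework); in a real build, each one would wrap an LLM call grounded in its own slice of internal data.

```python
# Toy sketch of the orchestrator pattern: a top-level agent routes a task
# to specialized sub-agents, each grounded in its own slice of internal data.
# These sub-agents are hypothetical stand-ins; a real one would wrap an LLM
# call with its own context, tools, and data sources.

def support_agent(task: str) -> str:
    # Would query the support-ticket store and summarize with an LLM.
    return f"[support] analyzed: {task}"

def contracts_agent(task: str) -> str:
    # Would retrieve the contract text and ask the LLM for cited clauses.
    return f"[contracts] reviewed: {task}"

SUB_AGENTS = {
    "support": support_agent,
    "contracts": contracts_agent,
}

def orchestrator(task: str, category: str) -> str:
    # A real orchestrator might classify the task with an LLM call;
    # here we route on an explicit category to keep the sketch runnable.
    agent = SUB_AGENTS.get(category)
    if agent is None:
        raise ValueError(f"No sub-agent for category: {category}")
    return agent(task)

print(orchestrator("Summarize last week's refund complaints", "support"))
```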

Why does this matter? LLMs (Large Language Models) work by recognizing and predicting language patterns based on enormous datasets. When you feed them your data—context, formats, documents, goals—you’re narrowing the model’s focus to your world. That’s when it moves from generic to intelligent. Without real data, it’s like asking a stranger to finish your sentence—they’ll get it grammatically right, but contextually wrong.

That’s why asking AI for a “marketing plan” or “business strategy” with no real data leads to underwhelming results and memes about how “AI can’t do anything valuable.”
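
Here’s what that difference looks like in practice: a minimal sketch using the OpenAI Python SDK, contrasting a generic request with one grounded in real data. The model name, file name, and prompt wording are placeholders, not a prescription.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ungrounded: the model can only return generic patterns.
generic = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a marketing plan."}],
)

# Grounded: the same request, narrowed to your world with real data.
with open("q3_sales_summary.txt") as f:  # hypothetical internal document
    context = f.read()

grounded = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a marketing strategist for our company."},
        {"role": "user", "content": (
            "Using the sales summary below, draft a Q4 marketing plan "
            "focused on our three weakest channels.\n\n" + context
        )},
    ],
)

print(grounded.choices[0].message.content)
```

Same model, same question in spirit, but the second call has your documents, your goals, and your constraints to work with.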

3. Search-Engine Thinking

Search engines have trained us to use AI incorrectly.

In almost every workshop I lead, people start by prompting AI the same way they’d use Google: Ask a question, get an answer, move on.

That’s not how LLMs work.

Search engines surface existing information—you validate it. LLMs generate new information—you guide it.

Generative AI is powerful precisely because it can create from context, but that’s also its flaw: it will always generate something, even when it’s wrong.

Technically, you can use a screwdriver as a hammer, but it’s not the right tool. Likewise, using ChatGPT as a search engine—especially for business decisions—doesn’t work. You’ll get confident, plausible nonsense.

To be clear: LLMs can function as search engines for generic lookups. I do it all the time. Yesterday, I described a long-forgotten gym attachment from 20 years ago, and within two prompts, ChatGPT found the exact product on Amazon. But that’s consumer-level use. For business strategy, it fails. Your company’s data, processes, and constraints aren’t in the public training data (well, I hope not). Without those inputs, the model can’t reason accurately about your environment.

Actionable Takeaways

Security: Implement an enterprise-level AI platform. Set governance rules. Train employees on what not to input.

Data: Feed your LLMs real data—customer records, SOPs, metrics, contracts. The more grounded your inputs, the more valuable your outputs.

Search Engines: Stop prompting like you’re Googling. Instead, collaborate with AI—add context, data, and goals to every conversation.

Final Note

This article was written using the same principle I just described—real data.

Early this morning, my son saw me typing in VS Code and asked, “Is that code?” I told him, “Not exactly—it’s my version of it.”

Here’s my process:

  • I draft in VS Code using a mix of plain text, Markdown, and HTML—my shorthand for thinking fast.

  • Then I feed that text into a custom GPT designed to refine writing without changing my voice.

  • The GPT edits grammar, structure, and flow; I review and edit the result (I made multiple edits to the output above). A rough sketch of that refinement step follows this list.
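
For the curious, here’s roughly what that refinement step would look like wired up through an API instead of a custom GPT. This is a minimal sketch assuming the OpenAI Python SDK, with an illustrative system prompt; the real instructions behind my custom GPT are far more detailed.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative system prompt; a real custom GPT would carry much more
# detail about the author's voice and formatting preferences.
EDITOR_PROMPT = (
    "You are an editor. Refine grammar, structure, and flow. "
    "Do not change the author's voice, opinions, or examples."
)

def refine(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": EDITOR_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

with open("draft.md") as f:  # the raw VS Code draft (hypothetical filename)
    print(refine(f.read()))
```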

That’s what “AI enablement” looks like. AI isn’t replacing my work—it’s enhancing it.

That’s what every business should be doing: using AI to augment capability, not outsource thinking.

There will be winners in the AI race. The question is—will your business be one of them?
