The 10-80-10 Rule for AI Implementation

I recently finished Ethan Mollick's Co-Intelligence. What stuck with me wasn't a new concept, but a term. Just the phrase "co-intelligence" gave me a better way to explain the relationship between humans and AI.

That's what good frameworks do. They give you language for things you already know but struggle to articulate.

This blog is my attempt at the same thing: a simple framework for how I approach every AI project.

The 10-80-10 Framework

I explain my AI projects as a 10-80-10 framework. Here's what I mean:

  • The first 10% of the work is done by me

  • The middle 80% of the work is done by AI

  • The last 10% of the work is done by me (and many times I have to loop back around)

Some people call this the "human-in-the-loop" method. For now, it's the ideal approach when working with AI.

There are practical reasons why this method matters. Let's touch on those and then look at a real-world example.

Practicality Is a Skill

Sometimes people get so caught up in AI, tech, and new trends that they lose track of practical solutions. When it comes to AI, you have to sit back and think about how these systems actually work.

These AI models are trained on trillions of tokens from the open web. When a user submits a prompt, the model generates its response one token at a time, picking each token based on mathematical probability learned from that training data.

An easy example: ask an AI to complete the sentence "one ring to rule them..." and you will almost certainly get "all," because that Lord of the Rings line appears all over the training data. It's mathematical probability.
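
You can watch this happen with an open model. Here's a minimal sketch using GPT-2 via the Hugging Face transformers library as a stand-in for the big commercial models; the mechanics (next-token probabilities) are the same, just at a smaller scale.

```python
# Minimal sketch: inspect next-token probabilities with an open model.
# GPT-2 stands in for the commercial models; the mechanics are the same.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "One ring to rule them"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the very next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p:.3f}")
# " all" should top the list: the model is just doing math.
```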

Let's go deeper. You have engineers in tech hubs, many of them in Silicon Valley, building and tuning these tools. Their biases come through; I don't need to cite this, just look it up. What that means is that "best practices" (as someone in Silicon Valley would define them for a given task) get baked into AI platforms.

I've run into this issue numerous times. I'll want to work on a strategy for my job, and the answers I get read like a business-school textbook. They lack any internal understanding of how my workplace actually operates. Most companies have their own way of doing things, and it rarely matches textbook "best practice."

This creates a major issue when people use AI and rely on the output to create an actionable plan.

This is where that initial 10% comes into play.

The First 10%

Simply put, I do the first 10% of the work: context engineering, gathering internal data, and creating a detailed initial prompt explaining the full situation and my desired outcome.

It sounds simple, but that first 10% can make or break your AI workflows.

I want my AI projects running on real internal data, not web search results. I want to control the narrative of where we're going. Is this going to be an interactive Claude artifact? A Claude or GPT project? A Copilot agent? A persistent thread? Setting AI up for success at the beginning ensures I am pushing the direction, not letting AI manipulate my decisions.

After you collect the data, engineer your prompt, and set clear guidelines, you let the AI do its thing.
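
What that setup step can look like in code, as a minimal sketch: the file names and the build_prompt helper below are hypothetical placeholders, but the structure is the point. Context first, then the goal, then explicit constraints that keep the AI on your data instead of generic best practices.

```python
# Minimal sketch of "the first 10%": gather internal context and build a
# structured prompt. File names and sections are hypothetical placeholders.
from pathlib import Path

def build_prompt(goal: str, context_files: list[str]) -> str:
    """Assemble a detailed initial prompt from internal data files."""
    sections = []
    for name in context_files:
        text = Path(name).read_text()
        sections.append(f"--- {name} ---\n{text}")
    context = "\n\n".join(sections)
    return (
        "You are assisting with an internal project. Use ONLY the context "
        "below; do not fall back on generic best practices.\n\n"
        f"CONTEXT:\n{context}\n\n"
        f"GOAL:\n{goal}\n\n"
        "CONSTRAINTS:\n"
        "- Match our internal terminology and processes.\n"
        "- Flag any gaps in the data instead of guessing.\n"
    )

prompt = build_prompt(
    goal="Draft a Q3 rollout strategy for the ops team.",
    context_files=["org_overview.md", "q2_metrics.csv", "process_notes.txt"],
)
```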

The Middle 80%

This is where AI shines.

You might be asking yourself: didn't I just spend hours collecting data and another hour writing a 100-line prompt? How was that only 10%?

Let's be practical. If you truly invest in that first 10% and make it count, then the next 80% is work that would likely take a high-functioning professional days or weeks to complete. I'm talking full strategies, data analytics, coding, reviewing mountains of data, and more.

This creates a wonderful paradox. I say wonderful because tech paradoxes are fun brain exercises.

You do the first and last 10% while AI does the middle 80%. But that split is measured in output. Measured in time, it can flip entirely: collecting the data and getting the project going might consume 80% of your hours. That's the strange beauty of AI.

The Final 10%

With the 80% done, you have to be the final 10%: QA. This is non-negotiable, and it's the easiest step to skip.

The AI output sounds good. Really good. But is it correct for your needs?

Don't let the fancy formatting and proper grammar blind you to problems. You have to QA. And as I mentioned, many times you'll re-loop the process. This is how impactful projects actually get completed.
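
One QA tactic that works well for data projects: recompute a few aggregates straight from the source and compare them against what the AI reported. A minimal sketch, assuming hypothetical CSV files and column names:

```python
# Minimal QA sketch: spot-check AI-reported aggregates against the raw data.
# File names and column names are hypothetical placeholders.
import csv
import math
from collections import defaultdict

def totals_by_region(path: str) -> dict[str, float]:
    """Recompute totals straight from the raw export."""
    totals: dict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["region"]] += float(row["amount"])
    return dict(totals)

def read_ai_summary(path: str) -> dict[str, float]:
    """Read the totals the AI reported in its summary."""
    with open(path, newline="") as f:
        return {row["region"]: float(row["total"]) for row in csv.DictReader(f)}

source = totals_by_region("raw_export.csv")
reported = read_ai_summary("ai_summary.csv")

for region in sorted(set(source) | set(reported)):
    a, b = source.get(region), reported.get(region)
    ok = a is not None and b is not None and math.isclose(a, b, rel_tol=1e-6)
    print(f"{region}: source={a} ai={b}" + ("" if ok else "  <-- mismatch, loop back"))
```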

A Real Example

Let me show you how this works in practice. I can't share specifics, but I can walk through the framework.

I was asked to produce a report with less than 24 hours' notice. It would be presented to the executive team. This is real; it happened in February 2026.

I found out on a Sunday, meaning it was needed by end of day Monday.

First thing Monday morning, I called my team to see if they had access to the raw data. They did not. We realized there was a data integrity problem, and I started to scramble. How could we process a massive dataset in such a short time, knowing the baseline data was compromised?

I thought for a minute. We decided to take two approaches, with me leading one and a teammate leading the other. We agreed to reconnect in four hours with progress.

I went to my data source and exported a large dataset. For perspective, due to export size limits, I had to separate the data into 27 spreadsheets.
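
In my case, the source system forced the split at export time. If you ever have to do the split yourself, it's a few lines of pandas; a rough sketch, with a hypothetical row cap and file names:

```python
# Minimal sketch: split a large export into multiple files under a size cap.
# The row cap and file names are hypothetical; real limits vary by system.
import pandas as pd

MAX_ROWS = 50_000  # hypothetical per-file row limit

df = pd.read_csv("full_export.csv")  # the complete dataset, if you can get it

for i, start in enumerate(range(0, len(df), MAX_ROWS), start=1):
    chunk = df.iloc[start : start + MAX_ROWS]
    chunk.to_csv(f"export_part_{i:02d}.csv", index=False)
    print(f"export_part_{i:02d}.csv: {len(chunk)} rows")
```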

Once exported, I wrote detailed instructions for Claude with the main goal of analyzing the data. I also wanted this to be a reusable Claude project. Claude helped write the project instructions. I created the project and uploaded all 27 sheets.

The first 10% was done.

Then I had it analyze the data per the instructions. I used Opus with extended thinking. The initial analysis took 20 minutes. At many points I was concerned I was going to hit my usage limit. Thankfully I have a high tier and made it through.
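
For anyone working through the API rather than a Claude project, extended thinking is a request parameter on the Anthropic Python SDK. A minimal sketch; the model name and token budgets are assumptions to adjust for your account:

```python
# Minimal sketch: calling Claude with extended thinking via the API.
# Model name and token budgets are assumptions; adjust to your account.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-20250514",
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 10000},
    messages=[{
        "role": "user",
        "content": "Analyze the attached exports per the project instructions.",
    }],
)

# The response interleaves thinking blocks with the final text blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)
```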

There it was. The initial 80% done. And it was beautiful.

The analysis was segmented with a combined view and data notes. I estimated it would have taken me 100 hours to complete the analysis manually.

Next was the final 10%. Here's the dirty truth: it took me 2 hours to QA. I kept finding inconsistencies between the original data source and the Claude output. We worked through 9 iterations during that 2-hour session.

But eventually it stuck. I sat there for a moment, completely confident in the data.

I took my dogs on a walk and came back to double-check some items. I pulled the team back together and found out my teammate was still in data-cleaning purgatory. He knew what he wanted to do but was still setting up the data to be processed. I told him to stop. We had the solution.

The report was sent off with time to spare. I was recognized by management.

I mention that not for congratulations, but to show that the 10-80-10 approach works. I frequently see "AI personas" talk about hypothetical situations. The above was real, and I simply followed the advice I give to others.

Why This Works

You can call this whatever you want. Co-intelligence. Force multiplier. Amplifying your skills. The language matters less than the principle:

AI in its current state works best when it amplifies human expertise, not replaces it.

Do the first 10%. Let AI do the 80%. Own the final 10%.

That's the whole framework.
