The Secret to Building Better AI Agents: Let the LLM Build Itself


Most people try to build AI agents from scratch — sitting down in Copilot, GPT, or Claude and typing freehand instructions. That’s where most projects go wrong.

One of the biggest lessons I’ve learned after building dozens of agents is this:

The best way to build an agent isn’t to start from a blank page — it’s to have the platform build the instructions for you.

It sounds like something out of Inception — using an LLM to help build an agent inside itself — but it works remarkably well in practice.

Here’s the exact process I use:

1. Pick your platform

Decide where your agent will live — whether that’s Microsoft Copilot, OpenAI GPT, or Anthropic Claude. Each one structures agents a bit differently, but they all share a common pattern: instructions, context, and data sources.

2. Upload the platform’s own documentation

Before you start typing instructions, upload the official documentation from that platform directly into the LLM.


This does two things:

  1. It gives the LLM a precise understanding of how its own environment works.

  2. It creates a feedback loop — the model is now using its own rules to guide its own creation.
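If you are working through an API rather than a chat UI, "uploading the documentation" amounts to folding the docs into the context you send with every request. Here is a minimal sketch of that idea; the helper name `build_grounding_context` and the character budget are my own illustrations (a real build would count tokens with the platform's tokenizer, not characters):

```python
# Sketch: fold platform documentation into the context sent to the model.
# build_grounding_context is a hypothetical helper; the budget is an
# arbitrary stand-in for the model's real context limit.

MAX_DOC_CHARS = 200_000  # stand-in for the context budget

def build_grounding_context(docs_text: str, budget: int = MAX_DOC_CHARS) -> str:
    """Wrap the platform docs in a preamble, trimmed to the context budget."""
    if len(docs_text) > budget:
        # Keep the start of the docs; a real build might chunk or summarize.
        docs_text = docs_text[:budget]
    return (
        "You are helping me build an agent on this platform.\n"
        "Here is the platform's official documentation:\n\n"
        + docs_text
    )
```

Chat interfaces do this for you when you attach a file — the point is simply that the docs become part of the model's working context.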

3. Brain dump your intent

Next, explain — in plain language — what you want your agent to do. Don’t worry about formatting. Just describe the task, data sources, users, tone, and outcomes. Think of this as your functional spec.

Then ask the LLM to:

“Generate detailed system instructions for an agent that does this, using the uploaded documentation for structure and formatting.”

That’s where the magic happens. The LLM will produce logical, sequential instructions aligned with how it expects agents to be configured.
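The brain dump plus the ask can be thought of as one structured request. The sketch below renders a functional spec along the dimensions listed above (task, data sources, users, tone, outcomes) into the step-3 prompt; the function name and the sample spec values are illustrative, not a fixed recipe:

```python
# Sketch: a brain dump structured along the dimensions from step 3.
# The field names mirror the article's list; the values are made up.

def make_instruction_request(spec: dict) -> str:
    """Render a functional spec into the generation request from step 3."""
    lines = [f"- {key}: {value}" for key, value in spec.items()]
    return (
        "Here is what I want my agent to do:\n"
        + "\n".join(lines)
        + "\n\nGenerate detailed system instructions for an agent that does "
        "this, using the uploaded documentation for structure and formatting."
    )

spec = {
    "task": "triage inbound support tickets",
    "data sources": "help-center articles, ticket history",
    "users": "tier-1 support staff",
    "tone": "concise and friendly",
    "outcomes": "a suggested category and draft reply per ticket",
}
```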

4. Create a guided feedback loop

Now, use the LLM as your build partner. Ask it:

  • “How should I configure each section?”

  • “What options are available in this platform’s interface?”

  • “Can you generate the JSON, YAML, or prompt schema this platform expects?”

Because it has the documentation context, the LLM can guide you step by step through the actual build process — often with better accuracy than trial-and-error guessing.
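When the LLM hands back a JSON or YAML config, it is worth sanity-checking it before pasting it into the platform. A minimal sketch, assuming a hypothetical schema built around the common pattern noted earlier (instructions, context, data sources) — the required fields here are my own, not any platform's actual schema:

```python
# Sketch: validate an LLM-generated agent config before using it.
# REQUIRED_FIELDS is a hypothetical schema, not any real platform's.

REQUIRED_FIELDS = {"name", "instructions", "data_sources"}

def validate_agent_config(config: dict) -> list:
    """Return a list of problems; an empty list means the config looks sane."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - config.keys()]
    if not config.get("instructions", "").strip():
        problems.append("instructions must not be empty")
    return problems
```

Catching a missing or empty field here is much cheaper than discovering it after the agent misbehaves in production.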

5. Iterate with judgment

Even though the process is recursive — the LLM is building within itself — your role as the human architect still matters. You’ll make small adjustments along the way based on business context, use cases, or workflow nuances. But instead of figuring out how to build, you’re now focused on what to build — the strategic layer.

6. Scale to complex systems

This same approach scales beautifully for multi-agent systems. Right now, I’m building an orchestrator agent with multiple sub-agents. Instead of reinventing the wheel, I uploaded the platform’s developer documentation to Claude and followed it sequentially while building in a second window. The result: faster builds, cleaner logic, and fewer broken configurations.
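The orchestrator pattern, stripped to its shape, is just a router in front of specialized sub-agents. The sketch below uses keyword matching purely for illustration; in a real build the platform's model would decide which sub-agent to invoke, and the agent names here are hypothetical:

```python
# Sketch: the shape of an orchestrator routing work to sub-agents.
# The keyword-match routing rule and agent names are illustrative only.

def research_agent(task: str) -> str:
    return f"[research] gathered sources for: {task}"

def writing_agent(task: str) -> str:
    return f"[writing] drafted copy for: {task}"

SUB_AGENTS = {"research": research_agent, "write": writing_agent}

def orchestrate(task: str) -> str:
    """Dispatch a task to the first sub-agent whose keyword it mentions."""
    for keyword, agent in SUB_AGENTS.items():
        if keyword in task.lower():
            return agent(task)
    return f"[orchestrator] no sub-agent matched: {task}"
```

Building the real version with the developer docs open in one window and the build in another, as described above, keeps the orchestration logic aligned with what the platform actually supports.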

Why this works

You’re essentially using the LLM as both architect and builder. You supply the intent and guardrails. It supplies the syntax, logic, and platform-specific execution.

It’s not just efficient — it’s also the most natural way to use AI:

Using LLMs to build within LLMs.

Hope you enjoyed that, have fun building!
