The First 90 Days as a Director of AI: A Playbook from the Other Side
The first week in a new AI leadership role has a specific feeling to it. You can sense what the organization expects: something impressive, fast. There's an unspoken pressure to justify the hire quickly, and AI roles carry extra weight because the assumption is that if you know AI, results should follow immediately.
That pressure is understandable. It's also the first thing you have to set aside.
I've been through this transition, and what I want to share isn't a highlight reel. It's the actual order of operations: what to do first, what to avoid, and where most new AI leaders get tripped up before they've really started.
The Trap That Catches Most New AI Leaders
New AI directors get hired to ship AI. The role has a mandate, and the default instinct is to act on that mandate right away. Launch a pilot. Build an agent. Show something.
Here's the problem: you don't know enough yet.
Here's the problem: you don't know enough yet. The organizations that have struggled most with AI didn't fail because they were slow. They failed because they moved before they understood what they were actually working with. If your processes are broken before AI arrives, automating them doesn't fix anything. It just breaks things faster and at greater scale.
So the first job is diagnosis. And diagnosis feels slow when the organization is expecting movement.
Your first job isn't to ship AI. It's to find out what actually needs it.
The First 30 Days: Understand What You Walked Into
Start with what's already running. In most organizations, by the time someone hires a Director of AI, teams have already been using AI on their own. Research from MIT confirms this pattern: shadow AI usage is the industry norm. People are using personal accounts, free tools, whatever they found that worked, often with company data flowing through them and no one watching.
That was true in my situation too. The audit of what was already happening was one of the most important things I did in the first 30 days. Not to police it, but to understand it. Where was the real usage? What was working? What was risky? What were people actually trying to solve?
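The mechanical part of that audit can start with a script. Here's a minimal sketch of the idea, assuming a hypothetical expense or SSO export with `employee` and `merchant_domain` columns; the vendor list and column names are illustrative, not a standard, so map them onto whatever your systems actually produce.

```python
import csv
from collections import Counter

# Illustrative list of AI vendor domains to flag; extend it with
# whatever actually shows up in your SSO logs and expense exports.
AI_VENDORS = {
    "openai.com",
    "anthropic.com",
    "midjourney.com",
    "perplexity.ai",
}

def flag_shadow_ai(expense_csv: str) -> Counter:
    """Count expense lines whose merchant domain is a known AI vendor.

    Assumes a CSV with 'employee' and 'merchant_domain' columns;
    rename these to match your expense system's real schema.
    """
    hits = Counter()
    with open(expense_csv, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["merchant_domain"].strip().lower()
            if domain in AI_VENDORS:
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in flag_shadow_ai("expenses.csv").most_common():
        print(f"{domain}: {count} expense lines")
```

A pass like this only tells you where the usage is. The conversations that follow tell you why, and those matter more.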
From there, I mapped the stakeholder landscape. Who were the early adopters already building things on their own? Who was skeptical and why? Where were the champions in IT who had platform knowledge I didn't yet have? This is where a lot of AI leaders miscalculate. They arrive with a toolset and a plan and start pushing. The better move is to ask questions first, especially of the people who have been doing this work longer than you have.
Being a Director of AI is not being a Dictator of AI. The teams around you have context, relationships, and ideas that you need. Go in as a collaborator.
Get Governance Right, But Don't Rush It
Shadow AI creates a real risk. Company data flowing through personal accounts is an IP exposure problem, a compliance problem, and potentially a legal problem. You need governance, and you need it reasonably fast.
The mistake I see is going too fast. Someone drafts a policy in isolation, sends it out, and creates a compliance document that nobody follows and Legal didn't actually sign off on. That's worse than having nothing because it creates a false sense of security.
The right approach is cross-functional from the start. Bring in Legal. Bring in IT. Understand what enterprise platforms already exist, which ones the IT team has confidence in, and what the realistic enforcement mechanisms are. Draft something together rather than presenting something finished.
It takes a few more weeks this way. That time is worth it. The output is a framework people actually use, on platforms that are actually secure, with accountability that's actually distributed across the right functions.
Start Training Early, Even When It's Imperfect
Within the first month, start training. Don't wait until you have it perfect.
I built a three-tier structure: foundational concepts for everyone, intermediate application for teams actively using AI, and advanced work for the people who were ready to build. The early sessions were not my best work. You're still learning the organization, still calibrating what each team actually needs, still figuring out where the gaps really are versus where you assumed they'd be.
That's fine. Start anyway.
What those early sessions accomplish isn't always technical. They communicate something more important: that you're willing to spend your time on the team, that you'll be available, that this isn't just a platform rollout where tools appear and everyone figures it out themselves. Trust gets built through consistent presence, not through a perfect launch event.
The teams that started those sessions with skepticism came back with their own project ideas within a few weeks. The training creates momentum. It doesn't have to be flawless to do that.
Cut Costs Before You Spend
This one doesn't show up in the LinkedIn posts about AI leadership, but it matters more than most of what does.
In the first 30 days, audit the vendor landscape. Go line by line through what the organization is paying for. What was purchased and is sitting unused? What is being renewed out of habit? What contracts can you exit? Are there vendors promising AI capabilities that aren't delivering on them?
This is not the exciting part of the role. It also sends a clear signal to leadership that you're looking at expenses, not just capabilities. In an environment where AI projects are competing for budget, demonstrating that you can find and eliminate waste creates credibility and real resources for the work that actually matters.
Look carefully at every contract before you assume renewal is the only option. There is often more flexibility than the default renewal path suggests.
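A spreadsheet works for this audit, but when the export is large, a few lines of code make the underused contracts jump out. A sketch, assuming a hypothetical CSV export with `vendor`, `seats_purchased`, `seats_active`, and `annual_cost` columns; your procurement or SaaS-management tool will name these differently.

```python
import csv

def flag_underused(contracts_csv: str, min_utilization: float = 0.5):
    """Yield contracts whose seat utilization falls below a floor.

    Column names here are assumptions; map them onto whatever your
    procurement or SaaS-management export actually calls them.
    """
    with open(contracts_csv, newline="") as f:
        for row in csv.DictReader(f):
            purchased = int(row["seats_purchased"])
            active = int(row["seats_active"])
            if purchased and active / purchased < min_utilization:
                yield row["vendor"], active, purchased, row["annual_cost"]

for vendor, active, purchased, cost in flag_underused("contracts.csv"):
    print(f"{vendor}: {active}/{purchased} seats active, ${cost}/yr")
```

The output is a shortlist for renewal conversations, not a verdict. Low utilization sometimes means a bad tool and sometimes means a training gap, which is exactly the kind of thing the first 30 days should surface.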
Days 30 to 90: Find Wins That Aren't About You
Once you understand the landscape, look for two or three projects where AI creates a clear, visible win for someone else. Not a showcase for your skills. Not a demonstration of what the technology can do. A genuine removal of friction from a colleague's actual workday.
Here is the part most people miss: the best quick wins come from your existing expertise, not your AI knowledge.
I have over 15 years of digital marketing experience. So when I was looking for early wins, the most natural place to look was the marketing team. I understood their problems before they explained them. I knew which workflows were painful, which tasks were repetitive, and where the leverage points were.
We built a set of agents together: one that generates ADA-compliant alt text for product images, one that converts Word documents into clean HTML for the website, a technical SEO tool, a schema markup generator, and more. None of these required me to learn an unfamiliar domain. They required me to take expertise I already had and encode it into tools the team could run themselves.
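To make one of those concrete: the schema markup generator is mostly about encoding structure the team already knows. Here's a minimal sketch of the idea, with hypothetical product fields standing in for a real catalog feed; a production version would pull these from the actual product data source.

```python
import json

def product_jsonld(name: str, description: str, sku: str,
                   price: str, currency: str = "USD") -> str:
    """Render schema.org Product markup as a JSON-LD script tag.

    The fields here are a hypothetical subset; a real feed would
    add images, brand, ratings, and per-SKU availability.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "description": description,
        "sku": sku,
        "offers": {
            "@type": "Offer",
            "price": price,
            "priceCurrency": currency,
            "availability": "https://schema.org/InStock",
        },
    }
    return ('<script type="application/ld+json">'
            + json.dumps(data, indent=2)
            + "</script>")

print(product_jsonld("Widget Pro", "A durable widget.", "WP-100", "19.99"))
```

The code is the easy part. Knowing which fields matter and what the quality bar looks like came from the marketing experience, not the AI expertise.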
That's the playbook. Start in territory you already know. The win is faster, the output is better because you understand the quality bar, and the relationship is easier to build because you're speaking the team's language from day one.
What those projects also gave me was runway. Credibility earned in familiar territory buys you permission to work on the harder, less familiar projects that come later. The big platform evaluations, the governance frameworks across multiple regions, the customer service AI work that involves stakeholders in other countries. Those came after. They came easier because of what came first.
Every quick win should have a clear before and after. Not in theory but in practice. If you can't articulate the specific impact to a non-technical stakeholder in two sentences, it probably isn't the right project for this phase.
What I Would Do Differently
Delegate more. That's the honest answer.
My instinct in a new role is to do things myself. I can move fast, I know the tools, and waiting for someone else to learn feels slow when there's pressure to deliver. That instinct is mostly wrong.
When you do everything yourself, you create a ceiling. You also rob the people around you of the opportunity to develop. The agent you built in an afternoon could have been a learning experience for someone on your team. The analysis you ran in isolation could have involved a teammate who now knows how to do it the next time.
There is a version of AI leadership where the director becomes the AI for the organization, handling every request personally. That's not leadership. It's a different kind of bottleneck.
Guide, don't do. Trust the people around you with real work. Let them hit walls and figure things out. That's where the organizational capability actually grows, and it's where your team members grow into the people who will carry this work forward long after the first 90 days.
What the 91st Day Is Really About
The end of the first 90 days isn't a milestone. It's a checkpoint.
The question isn't "what did I ship?" It's "what is still running?" A governance framework people are actually using. A training momentum that has carried into teams you didn't directly work with. A quick win that someone mentions in a passing conversation as something that changed how they work. A cost line that got cleaned up.
None of that is dramatic. All of it is evidence.
The first 90 days don't transform an organization. They earn permission to.
Build the foundation. Then ask for the keys.