The Exact Walkthrough I Use When Coaching Executives on AI
Shifting your mental model can turn generic outputs into scalable expertise.
Hey!
Chris here. Welcome to Blueprint—the newsletter to help you build a winning engineering team.
I've been doing a lot of 1-on-1 AI coaching with founders, CEOs, and executives recently. These are people who've heard about this technology for 2+ years but can't figure out why they keep getting mediocre results.
The sessions almost always start the same way: they tell me how they've "tried" AI, AKA they opened ChatGPT, typed in something strategic, got back an answer so generic it was essentially useless, and landed somewhere between "AI is overhyped" and "I'm just not the type of person who's going to get this."
I push back on both. Hard.
The problem is they're using it with a mental model that's completely wrong. And until that gets fixed, they'll keep getting the same output, blaming the tool, and missing out on all its potential.
So here's what I walk them through to fix that. 👇️
📒 DEEP DIVE
The Exact Walkthrough I Use When Coaching Executives on AI
How I fix their mental models and reveal where the real leverage lies.

The first thing I do is paint a picture. I'll literally talk them through what ChatGPT looks like. Because once you have in your head that there's a big text area where you type something in and it comes back with an answer, you have somewhere to start.
People can grasp that. They've used a chat. They've done a Google search. So we start there.
And then I explain that everything they think they know about what happens next is (probably) wrong.
How LLMs Actually Work
When you send your first prompt, it goes to the model, and you get an answer back. That's intuitive.
But the second time you send something, it doesn't send just that prompt. That LLM has total amnesia. It has no idea what happened before.
So what gets sent is the entire first prompt, the entire first response, and your new prompt. All of it, bundled together, every single time.
So with every exchange, the whole package grows. That's your context window.
And when the conversation gets long enough, the model can no longer hold it all coherently. So it condenses the older parts into the essence of what was discussed. The newest chats stay fresh and detailed, while the older ones get compressed out.
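The bundling above can be sketched in a few lines. This is a minimal illustration, not any real API: `call_model` is a stand-in for an LLM call, and the point is the shape of what gets sent each turn.

```python
# A minimal sketch of the "total amnesia" loop. `call_model` is a stand-in
# for a real LLM API; what matters is the shape of what gets sent each turn.
def call_model(messages):
    # A real model sees ONLY `messages` -- it keeps no state between calls.
    return f"(reply based on {len(messages)} messages)"

history = []

def send(prompt):
    history.append({"role": "user", "content": prompt})
    # Every turn re-sends the entire history: all prior prompts and replies.
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

send("Summarize our Q3 churn numbers.")
second = send("How does that compare to Q2?")
# The second call shipped 3 messages: prompt 1, reply 1, prompt 2.
```

The "memory" lives entirely on the client side, in that growing `history` list.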
When I explain this, executives almost always stop and go, "Oh. I didn't know that." They assumed they were talking to a system with a running memory.
That correction alone has a huge impact on how they approach these tools.
But that's only the beginning.
You're Not Training the Model
Once the context window clicks, I move to dispelling another misconception I hear from every company I walk into.
The moment businesses get serious about AI, they say: "We need to train the model on our data."
Sounds great, in theory. But except for the actual frontier companies—OpenAI, Anthropic, Google—nobody is training models.
In fact, when you use an LLM as a business, you want your relationship structured so that your interactions cannot be used to train their models.
What you're actually doing to improve the outputs is changing what you put into the chat box. You’re feeding it better context.
You have to remember that these models have been trained on essentially all of written human knowledge. So if you ask a strategy question with 0 context, you get the average of all of it back in a gray, generic answer.
And while the best models now use a mixture of experts under the hood, where specialized sub-models handle different types of problems, the principle stands: empty context box, empty answer.
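To make the point concrete, here's a sketch of the only variable worth changing. The company details and wording are illustrative, not from any real prompt.

```python
# Sketch of "feeding it better context": same model, same question,
# different input. `company_brief` is an illustrative placeholder.
company_brief = (
    "B2B SaaS, 40 employees, $6M ARR, churn concentrated in the SMB tier."
)
question = "Where should we focus retention efforts next quarter?"

generic_prompt = question  # zero context: the average of everything back

grounded_prompt = "\n\n".join([
    "You are advising the leadership team described below.",
    f"Company background: {company_brief}",
    f"Question: {question}",
])
# The only thing that changed between the two prompts is the context.
```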
A lot of people hit that once, decide AI doesn't work, and walk away. The variable they never changed was the context.
When it has exactly the right background—right format, right amount, everything it needs and nothing it doesn't—the output is exceptional. Phenomenally better, not a little better.
Simply put, the context you give these systems is the most important thing you do.
Tools, Tool Calls, and the Harness
Next, we get to what I think is the most misunderstood concept in this entire space.
When you work with an AI engine, you can tell it what tools are available. And when I ask executives how they picture this, they describe the same thing: the model goes out and runs those tools directly. That is...incorrect.
What's actually happening is closer to this:
You to LLM: "You have a browser you can use. Open Google with your browser."
LLM to browser: "Open Google."
As you can imagine, a browser has to exist in the surrounding system to receive that command, execute it, and feed the results back.
That surrounding system is what we call the AI harness.
It might be Claude Code or an open-source equivalent, but the key point is that whoever controls the harness controls everything. Own it, and you can route tasks to different models depending on their strengths—all through a single environment you've built to match what you actually need.
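The core of any harness is a small loop. Here's a toy version of it, with a hard-coded stand-in model and a fake browser tool; nothing here is a real API, but the division of labor is the real one.

```python
# Toy harness loop: the model only *requests* a tool; this surrounding code
# is what actually executes it. `fake_model` and the tools are stand-ins.
def fake_model(transcript):
    # A real LLM would decide this; here one tool call is hard-coded.
    if not any(m["role"] == "tool" for m in transcript):
        return {"tool": "browser", "args": {"url": "https://google.com"}}
    return {"answer": "Opened Google."}

def browser_tool(url):
    return f"loaded {url}"  # stand-in for actually driving a browser

TOOLS = {"browser": lambda args: browser_tool(args["url"])}

transcript = [{"role": "user", "content": "Open Google with your browser."}]
while True:
    step = fake_model(transcript)
    if "answer" in step:
        final = step["answer"]
        break
    # The harness, not the model, runs the tool and feeds the result back.
    result = TOOLS[step["tool"]](step["args"])
    transcript.append({"role": "tool", "content": result})
```

Whoever writes that `TOOLS` table and that loop decides what the model can touch. That's what owning the harness means.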
It's at this point that I open my terminal and say: "Watch this."
The terminal on a Mac is far more powerful than people think. There's a command-line tool called Agent Browser. You type "agent browser open [URL]" and a browser opens.
What shocks people every time is that you don't have to explain the tool to the LLM. You just tell it to run "agent browser help" to understand it. The model runs that command, reads the output, figures out what's possible, and starts using the tool correctly—all on its own.
When you see that in real time, everything starts to snap into place. You've just opened Pandora's box for literally everything.
Where the Light Comes On
Once executives see that the model can self-orient around any tool just by reading its own documentation, the natural question becomes: "Okay, so what can you actually build with this?"
Take a software business. You give an agent read-only access to your database, and now it can query on your behalf. Instead of interrupting a data engineer every time you need something, you just describe it to an agent, which runs the queries and delivers an answer 15 minutes later.
It just gets more exciting from there. Take support tickets as an example.
A ticket comes in, the agent pulls up the internal tracking system, reads the relevant code, checks the logs, and queries the database for customer context.
And because you've built a document describing your voice, your tone, the way you communicate—pulled from 8,000 past tickets, Slack messages, and emails—it composes the response in your voice, armed with everything it needs to answer well. The output is probably better than what you'd fire off when you're buried under 50 other things.
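Structurally, that whole flow is just more context assembly. A hypothetical sketch, where every lookup is a placeholder for a real integration:

```python
# Hypothetical ticket pipeline: every value below is a placeholder for a
# real integration (tracker, code search, logs, database, voice document).
def handle_ticket(ticket, voice_doc):
    context = {
        "tracker": f"(tracker history for ticket {ticket['id']})",
        "code": "(relevant source excerpt)",
        "logs": "(matching log lines)",
        "customer": "(account details from the database)",
        "voice": voice_doc,  # how you sound, distilled from past tickets
    }
    # With everything assembled, drafting is one more model call (stubbed).
    draft = f"Re: {ticket['subject']} (drafted per {context['voice']})"
    return context, draft

context, draft = handle_ticket({"id": 42, "subject": "Login fails"}, "voice.md")
```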
And here's the part that really lands: it scales. One ticket or 1,000, the work on your end is the same.
Every new tool you build gets fed back into the system as a new capability, and the whole thing compounds like a snowball. It just keeps getting better.
That's the moment I'm building toward in every one of these sessions. When I describe it, executives almost always call BS. Fair enough. Then I run the whole interaction in front of them.
That's when they stop doubting and start asking how fast they can get this.
BEFORE YOU GO…
The executives who believe AI doesn't work (or, worse, doesn't work for them) have flawed mental models.
You have to stop blaming the tool for poor answers when you gave it nothing to work with.
There’s an incredible opportunity to build systems where your expertise and judgment run at a scale you never could alone.
But to get there, you have to build better context and own the harness.
And there's no better time to get started than now.
Talk soon,
Chris.