
You're (Probably) Giving AI Too Many Instructions

Why shifting from procedural to declarative thinking unlocks AI's potential.

Hey!

Chris here. Welcome to Blueprint—the newsletter to help you build a winning engineering team.

I've been doing a lot of interactive coding sessions with clients lately, and the same pattern shows up almost every time.

People see what's possible—watching me build features in real time—and their first instinct is to break down what needs to be done into steps.

Do this, then do this, then do this.

But that's exactly why automation breaks. You're trying to script every possible path when you should be declaring the destination.

Here's the better approach. 👇️ 

📒 DEEP DIVE

Why You Need to Stop Micromanaging AI

Your hyper-detailed prompts are producing worse outputs. Here's what you should do instead.

The Problem with How We Naturally Think

I’ve built software systems for people for years. And I cannot tell you how frequently my non-technical clients try to describe how to get something done.

I have to stop them and say, "Don't worry about any of that."

Because you'll only mention what you already know. There's a whole body of knowledge—tools, practices, approaches—that you wouldn't know to use.

Instead, just tell me what the end looks like. What does the world look like when this is working?

Humans naturally use whatever knowledge they have to describe how. But that assumes we know the best way to get there.

We think step-by-step. AI doesn't need to.

The Shift from "How" to "What"

When you think about deployments—getting a website live so people can access it—most people describe the process:

"Take the code, build it, upload it to the server, configure the environment, restart the service."

That's procedural thinking. You're telling the system how to do the work.

But here's a better way—describe what done looks like:

"When I pull up the website in my browser and log in, this section works. I can see the dashboard. The search feature returns results."

That's declarative. You're defining the end state.

The difference matters because the first approach assumes you already know the best way. The second lets the system apply its own knowledge of modern practices and tools to get there.
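To make the contrast concrete, here's a small sketch of my own (not from the newsletter): the procedural version fixes the steps, while the declarative version only fixes the finish line.

```python
# Procedural: a fixed recipe of steps. If any assumption changes
# (different server, different build tool), the script breaks.
procedural = ["build the code", "upload to server", "configure env", "restart service"]

# Declarative: only the end state is fixed. How to reach it is left
# entirely to whoever (or whatever) executes the task.
declarative = {
    "site loads in the browser": True,
    "login works": True,
    "dashboard is visible": True,
    "search returns results": True,
}

# Notice the declarative version never mentions servers or restarts at all.
print(len(procedural), len(declarative))  # → 4 4
```

The point of the data shapes: a list encodes order ("do this, then this"), while the dictionary encodes only conditions that must hold at the end.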

State Machines Make This Work

There's a computer science concept called a state machine. The idea is simple: a system is always in one of a defined set of states, and it moves between them through transitions. Here, the agent's job is to keep transitioning until the system reaches whatever state you asked for.

When you declare what "done" looks like, you're defining a state. The agent keeps working until the system is in that state.

This is why agents can run for 30 hours on a single task. They're only "done" when they verify the state has been achieved.
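Here's a toy sketch of that loop in Python (the states and actions are invented for illustration): the driver never knows the steps in advance; it just keeps acting until the declared end state verifies.

```python
# Toy model: the current state of the system, tracked as flags.
state = {"built": False, "uploaded": False, "live": False}

def is_done():
    """The declared end state: the site is live."""
    return state["live"]

def take_next_action():
    """The 'agent' picks whatever action moves the system forward;
    the caller never scripted this order."""
    if not state["built"]:
        state["built"] = True
    elif not state["uploaded"]:
        state["uploaded"] = True
    else:
        state["live"] = True

# The driver loop: keep going until the end state verifies (with a safety cap).
attempts = 0
while not is_done() and attempts < 100:
    take_next_action()
    attempts += 1

print(is_done(), attempts)  # → True 3
```

The loop's exit condition is the declared state itself, which is why it doesn't stop after any particular number of steps, only when verification passes.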

When I give Claude a complex task, I describe the end state in my prompt: "Here's what the world looks like when this is working correctly."

Then I'm very thorough about how to test whether we've reached that state. Can you log in? Does the feature work? Do the tests pass?

The agent will keep going until those conditions are met. It won't stop early, because the system isn't yet in the state it needs to reach.

That's when it'll crank for 8, 10, 20 hours on a problem. It's honestly crazy.

How You Know You're Done

The key here is that end states must be verifiable.

When I deploy a website, the verification isn't "Did the deployment script finish?" The verification is, "When I log in, does this section work?"

That's how you know the state has been achieved.

This forces you to think differently about success criteria. You stop trusting execution and start testing outcomes.
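One way to make that concrete is to write the outcome checks down as code. A sketch, with the check bodies as stand-ins (real ones would make HTTP requests, log in with test credentials, run the test suite):

```python
# Hypothetical outcome checks; the names and bodies are invented for illustration.
def can_log_in():          return True
def dashboard_renders():   return True
def search_returns_hits(): return True

END_STATE = {
    "can log in": can_log_in,
    "dashboard renders": dashboard_renders,
    "search returns results": search_returns_hits,
}

def failed_checks():
    """Empty list means the declared end state has been reached."""
    return [name for name, check in END_STATE.items() if not check()]

print(failed_checks())  # → []
```

Note the difference from checking a deployment script's exit code: these checks test the world, not the script.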

And if you give the agent that clarity, the tools it needs, and an exact definition of how you'll know it's done, it will use whatever it has to get there.

Why This Enables Autonomy

Agents don't need constant intervention when they know two things:

  1. What the goal is

  2. How to check if they've achieved it

That's why they can keep going independently. They have clarity on the destination and a way to verify they've arrived.

This isn't magic or self-improvement. It's just clear goal-setting combined with verification.

BEFORE YOU GO…

The next time you prompt an agent, ask yourself: Did I describe the steps, or did I describe the finish line?

Because better results don't come from better instructions. They're the outcome of clearer definitions of success.

So stop telling the system how to do the work. Define what done looks like, and trust that it'll find the right path.

Talk soon,

Chris.