The Lowest-Risk Way to Start Using AI in Your Company
Before you automate anything, do this first.
Hey!
Chris here. Welcome to Blueprint—the newsletter to help you build a winning engineering team.
I can guarantee something without knowing a single thing about you or your company:
You have a documentation problem.
You just wrapped up a big documentation initiative? You still have a documentation problem. Because the moment that project finished, every piece of it started going stale.
That's not a dig. It's just the reality of building software and running complex systems over time. Documentation rots. Code evolves. And the people who understood why something was built the way it was inevitably move on.
That’s a liability sitting right in your codebase.
Fortunately, AI tools have made solving this dramatically easier. Let me show you what I mean. 👇️
📒 DEEP DIVE
The Lowest-Risk Way to Start Using AI in Your Company
How good documentation becomes the context layer everything else depends on.

When companies start thinking about AI, the first instinct is usually to jump to the flashy stuff:
Autonomous coding
Automated workflows
AI-generated features
A complete overhaul of how the engineering team operates
Those things have real promise. But for most companies, that is too aggressive, too early.
The reason is that the tools are only as useful as the context you give them. And if your company's knowledge is locked inside people's heads, buried in stale files, or never written down to begin with—the AI is starting from zero every time.
You have to make your company legible before you can make it autonomous.
Why Documentation Is the Right First Move
Large companies, and plenty of mid-sized ones, have accumulated layers of complexity over time. Often, there are entire corners of the codebase that no one fully understands anymore.
AI is exceptionally good at exploring those systems. It can take a block of code that nobody on your team wants to touch, dig into it, and produce documentation that gives everyone a clear picture of how it actually works.
What used to require months of knowledge transfer—or just never happened at all—can now actually get done.
But here's the part that matters strategically: documentation is not just for your people. The files you build around your systems become context for your AI agents. And the better that context is, the more accurately and safely those agents can operate.
So every good documentation decision you make today compounds into better AI performance tomorrow.
Getting this right is not complicated. It just requires doing it in the right order.
The Practical Playbook
Step 1: Start small, and start at the top
Do not roll this out company-wide on day one. Pick a small group of people and start there. And make sure leadership—the CTO, CEO, or whoever is really driving this—takes the first real swing at it themselves.
There is a cultural component here that cannot be skipped. Your team is watching to understand whether this is something they should embrace or be afraid of. If leadership is in the trenches doing it, it normalizes everything.
As for who on your team will actually drive it, you will know very quickly. Some people will treat it like homework. Others will be doing it on nights and weekends because they are genuinely fascinated and cannot stop. Find those people. They are your internal champions.
Step 2: Audit the repo
Pull down the codebase and open up your AI agent—Claude Code is the obvious choice right now. Do a basic audit. Does a README.md file even exist? You would be surprised how many times we crack open a codebase and there is nothing at the top level. If that's true for you, get that document right fast.
From there, build a docs directory inside the repo and ask the agent to help you think through what top-level documentation this codebase actually needs. Let it explore and come back with a structure. Then start filling it in.
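As a sketch of where that lands, a first pass from the agent might propose something like the structure below. Every file name here is hypothetical—yours will depend entirely on what the codebase actually contains:

```
docs/
  architecture.md    # high-level map: services, data flow, system boundaries
  payments.md        # the subsystem nobody wants to touch (see Step 3)
  deployment.md      # how code gets from a merged PR to production
  glossary.md        # internal jargon, acronyms, domain terms
```

The point is not the exact files. It is that the structure exists inside the repo, next to the code, where both your team and your agents will find it.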
Step 3: Document the scary stuff
Every company has that one section of code nobody wants to touch because they are not sure what will break if they do, and nobody can fully explain why it works the way it does. That is exactly where you start.
Get the agent to go in there, understand it, and document every angle of it until someone on your team can explain it clearly.
Step 4: Set up your agent guidance files
There is a file called AGENTS.md—or CLAUDE.md if you are using Claude—that gets loaded into the agent's context every single time a session starts. That is where you point it toward the documentation that matters.
Something as simple as "we have docs in the /docs folder, and these two files are critical for anything touching the payment system" makes a material difference in how well the agent performs. You are essentially educating the agent so it starts every task knowing more.
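To make that concrete, here is a minimal sketch of what an AGENTS.md (or CLAUDE.md) might contain. The paths and file names are hypothetical placeholders—point it at whatever docs you actually built:

```markdown
# Agent guidance

- Project documentation lives in /docs. Read docs/architecture.md
  before proposing any structural change.
- Anything touching the payment system: read docs/payments.md first.
  That subsystem has non-obvious invariants documented there.
- Run the full test suite before considering a task complete.
```

A handful of lines like this is enough to start. You can grow the file as the documentation grows.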
Step 5: Build test coverage
Once your documentation is solid, the next move is automated test suites. Get the AI agents building test coverage across the codebase. If you guide them correctly, they are exceptional at this. If you do not, they will generate thousands of tests that check whether true is true — and that helps nobody.
The reason you do this before you start letting agents write real code is simple: you are about to hand tasks to AI agents you do not fully trust yet. Tests are how you verify they are making things better without quietly breaking something else.
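To make "guide them correctly" concrete, here is the difference in Python, using a hypothetical `apply_discount` function. The first test is the kind an unguided agent churns out by the thousand: it passes no matter what. The second pins down real behavior and edge cases:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_exists():
    # The "true is true" test: always passes, verifies nothing.
    assert apply_discount(100.0, 10) is not None

def test_discount_behavior():
    # Tests worth having: exact values, boundaries, and failure modes.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(100.0, 0) == 100.0   # zero discount is a no-op
    assert apply_discount(200.0, 25) == 150.0
    try:
        apply_discount(100.0, 150)             # invalid input must fail loudly
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

The prompt that separates these two outcomes is simple: tell the agent to test observable behavior, boundaries, and error cases—not that functions return something.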
Step 6: Use agents for diagnosis before you let them repair
After documentation and tests are in place, start using agents to diagnose bugs. Not fix them—just diagnose. This is very low risk if you give the agent read-only access to your logs, read-only access to the codebase, and read-only access to relevant data. An agent that does not have write access to a system cannot blow it up.
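How you enforce read-only depends on your tooling. Claude Code, for example, supports a permissions block in its settings file; a sketch along these lines denies the write tools while allowing the read-only ones (check the current docs for exact rule syntax before relying on this):

```json
{
  "permissions": {
    "deny": ["Edit", "Write", "Bash(rm:*)"],
    "allow": ["Read", "Grep", "Bash(kubectl logs:*)"]
  }
}
```

The same principle applies to databases and log systems: give the agent a read-only credential, not your admin one.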
The sequence is deliberate: documentation, then test coverage, then diagnosis, then broader agentic workflows. Skip steps, and you will regret it.
The Bigger Lesson
What I just described is as much a corporate learning effort as it is a documentation project.
The biggest bottleneck right now is that companies do not yet know how to use the tools they have access to. Because until you've seen an agent do something genuinely useful inside your actual codebase, it is hard to understand what is possible. But once you see it, it is hard to unsee.
I talked recently with a CTO who had his first real interaction with agentic AI. It set off a firestorm in his head. All of a sudden, the question shifted from "Why would I bother with documentation?" to "Wait... what if all that documentation becomes context the agent can actually use?"
Once that clicked, everything else started clicking, too.
BEFORE YOU GO…
Documentation is the entry point. It delivers immediate value, reduces risk at every subsequent step, and builds the context layer that everything else depends on.
If you want to use AI well inside your company, start by making your business legible.
Because in this case, the boring starting point compounds.
Talk soon,
Chris.