AI Doesn’t Modernize a Codebase. Systems Do.

How to turn existing codebases into AI-enabled engineering systems.

Hey!

Chris here. Welcome to Blueprint—the newsletter to help you build a winning engineering team.

Conversations about AI in engineering almost always start from a blank-page perspective.

People think about new products, fresh codebases, and teams that can build however they want.

That conversation is exciting and worth having. But it's not where most of the companies I’m working with actually are.

They have software that's been running for a decade or more and a codebase full of decisions nobody fully remembers making.

On top of it all, there’s mounting pressure to do something about it.

Fortunately, there’s a specific way through this. Let me break it down 👇️ 

📒 DEEP DIVE

The System is the Strategy

Why AI modernization starts with the way engineering work moves.

The Old Operating Model is the Bottleneck

Once you’re dealing with software that already exists, the problem changes.

What matters is how work moves through the system.

Here’s what I keep seeing.

A company has an engineering staff that is still coding the old way. Every engineer, at best, produces 1 engineer’s worth of code in a day.

That used to be normal. But now the economics are changing.

Engineers are expensive, and that cost eventually makes its way to the customer. And when your customers start getting cheaper, faster, and more effective options thanks to AI, you start losing ground.

That’s usually when the conversation changes.

The company may have known about AI for a while. They may have been skeptical. Their engineers may have been skeptical.

But eventually, leadership looks at the math and realizes it cannot ignore this forever.

And yes, the codebase is old. But the real problem is that the operating model around the codebase was built for a different era.

Work enters the system. A human picks it up, investigates, writes the code, decides what tests to write, and opens the pull request. Then the next ticket comes in, and the whole thing starts over.

That workflow made sense when every unit of progress had to come from one engineer working one task at a time.

It makes much less sense when the company needs 10x, 20x, or 40x the leverage from the same system.

The Team Has to Move With the System

Ironically, changing the operating model is primarily a people problem.

Inside the same company, you’ll usually hear 2 different reactions:

  1. “We have to use this.”

  2. “AI just produces vibe-coded slop and there’s no way around it.”

So the company gets stuck. Leadership can see the writing on the wall, but it does not know what to do with it.

This is why most of my engagements start on a Zoom call with a handful of engineers, a project manager, maybe the founder.

The engineers are usually nervous. I can see it on their faces. They hear “AI modernization” and think, “I’m going to lose my job.” I understand why they feel that way, but I tell them the same thing every time:

The only way AI replaces you is if you resist it.

If you learn how to use the tools and help the system get better, your judgment becomes more valuable because it compounds through the workflow.

If you refuse to touch it because you’re afraid, the business still has to move forward.

Because what the company needs most is for the work to move in a new way.

AI Can’t Stay Trapped in One Person’s Workflow

That’s why individual AI adoption is not enough.

One engineer using a tool does not change how the company works.

This is the part that many companies misunderstand.

They think AI adoption means one engineer starts experimenting with a tool like Claude Code or Cursor, figures out a better prompt, and adds something to the codebase.

Sure, that can help the individual. But it does nothing to modernize the company because the learning stays trapped with that person.

To get the most out of AI, you have to take what the team is learning and document it inside the codebase so every other engineer can benefit from it too.

The codebase should not just be a place where code lives. It should become part of the operating system for how the team works with AI.

That’s why the job is to make the existing system AI-operable, in the actual environment the business already depends on.

Start Where Work Already Enters

Once you accept that, the next question is: Where does AI enter the system?

In a lot of companies, the answer is already sitting in front of them.

Most teams use one of a handful of project-management suites. Jira is still the most common. We use ClickUp internally. But the specific tool is not really the point.

What matters is that work has a clear entry point, whether that’s a feature getting created, a bug being reported, or a request getting moved into the queue. The workflow starts somewhere.

So one way to modernize the system is to wire AI into the place where engineering work already begins.

In one implementation, we stand up a server with an agent on it.

That agent responds to tickets entered into the project-management system. When a ticket comes in, the agent picks it up and starts working through the actual system.

It already has the codebase checked out and the operating environment wired up. It takes the request and searches the codebase to figure out which files it needs to read and understand. It builds on the original spec, enriches it, adds pointers to the files that probably need to be modified, and identifies what has to happen.
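To make that enrichment step concrete, here’s a minimal sketch in Python. The ticket shape, enrich_spec, and CODEBASE_ROOT are illustrative assumptions, not any specific product’s API, and a real agent would ask the model or query an embedding index rather than grep for keywords.

```python
from dataclasses import dataclass, field
from pathlib import Path

CODEBASE_ROOT = Path("/srv/repo")  # hypothetical checkout the agent works in

@dataclass
class EnrichedSpec:
    ticket_id: str
    summary: str
    candidate_files: list[str] = field(default_factory=list)
    notes: list[str] = field(default_factory=list)

def enrich_spec(ticket: dict) -> EnrichedSpec:
    """Build on the original ticket: find the files that probably need
    to change and record what the agent should read before coding."""
    spec = EnrichedSpec(ticket_id=ticket["id"], summary=ticket["summary"])
    # Naive relevance pass: look for terms from the ticket in the source tree.
    # A real implementation would use the model itself or an embedding index.
    terms = {word.lower() for word in spec.summary.split() if len(word) > 4}
    for path in CODEBASE_ROOT.rglob("*.py"):
        text = path.read_text(errors="ignore").lower()
        if any(term in text for term in terms):
            spec.candidate_files.append(str(path.relative_to(CODEBASE_ROOT)))
    spec.notes.append(
        f"Read these {len(spec.candidate_files)} files before modifying anything."
    )
    return spec
```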

Then it gets into judgment.

What kind of tests should be written? Unit tests? Integration tests? End-to-end tests?

That decision depends on the nature of the task, the part of the codebase being touched, and the risk of the change. That judgment is years of wisdom, written into the rules.

I know what kind of tests you ought to write under different circumstances, so I encode that into how the system operates.
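As a hedged illustration of what encoding that judgment can look like, here’s a small rule function. The flags and categories are assumptions made up for the sketch; the real rules come from the specific codebase and its risks.

```python
def choose_tests(change: dict) -> list[str]:
    """Map the nature and risk of a change to the tests worth writing.

    The flags are illustrative; the enrichment step would set them
    based on which files the change touches.
    """
    tests = ["unit"]  # nearly every change earns a unit test
    if change.get("crosses_module_boundary"):
        tests.append("integration")  # a contract between components moved
    if change.get("touches_payments_or_auth"):
        tests.append("end_to_end")  # high-risk path: exercise the full flow
    return tests

# Example: a bug fix inside the billing module
print(choose_tests({"touches_payments_or_auth": True}))
# -> ['unit', 'end_to_end']
```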

When the agent finishes, it opens a pull request in GitHub, Bitbucket, or whatever source-control tool the company uses.
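For the GitHub case, that last step can be as plain as one call to the REST pull-request endpoint. In this sketch, OWNER/REPO, the branch names, and the token handling are placeholders; Bitbucket exposes an equivalent API.

```python
import os
import requests

def open_pull_request(branch: str, title: str, body: str) -> str:
    """Open a PR from the agent's branch so a human reviews the work."""
    resp = requests.post(
        "https://api.github.com/repos/OWNER/REPO/pulls",  # placeholder repo
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": title, "head": branch, "base": "main", "body": body},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]  # the link the team reviews
```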

That is not the only way this can work, but it’s a useful example of the bigger shift: AI becomes more valuable when it’s wired into the place real work already enters, with enough context and rules to act on that work intelligently.

Why the First 5 Tickets Matter

But wiring it in is only the beginning. The system has to be tuned against reality.

In one recent engagement, we told the company: Give us 5 tickets from Jira. Give us a mix of features and bugs. Give us tickets from the different parts of the codebase you want us to work on.

That is enough to start, because 5 real tickets show you where the system is right and where it’s wrong.

I read everything the agent does in the beginning. Those first pull requests are critical because they show me what assumptions the agent is making.

It might believe it’s working in a regular, generic codebase. But the real codebase may be very different.

So we watch the output. We look at the mistakes. We get the tooling right. We tweak the rules. We make sure the system is using the right models.

That monitoring phase might last weeks. It might last months. For some companies, it might be ongoing.
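To make the monitoring concrete, here’s one lightweight sketch: record the human verdict on every agent PR, and treat any mistake tag that keeps repeating as a candidate for a new rule. The log file and tag names are made up for the example.

```python
import json
from collections import Counter
from pathlib import Path

LOG = Path("agent_review_log.jsonl")  # hypothetical append-only review log

def record_review(pr_url: str, verdict: str, mistakes: list[str]) -> None:
    """Append one human review outcome ('merged' or 'rejected')."""
    with LOG.open("a") as f:
        f.write(json.dumps({"pr": pr_url, "verdict": verdict,
                            "mistakes": mistakes}) + "\n")

def recurring_mistakes(min_count: int = 2) -> list[tuple[str, int]]:
    """Mistake tags seen repeatedly are candidates for a new rule."""
    counts: Counter[str] = Counter()
    if LOG.exists():
        for line in LOG.read_text().splitlines():
            counts.update(json.loads(line)["mistakes"])
    return [(tag, n) for tag, n in counts.most_common() if n >= min_count]
```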

But once the system is set up correctly, it starts to run.

And when it’s right, it’s really good.

The ticket gives the agent the task. The codebase gives it context. The rules give it judgment. The tests give it guardrails. The pull request gives the team something to review. The monitoring makes the system better over time.

And that’s how the workflow compounds.

BEFORE YOU GO…

Treating AI modernization like a tooling decision is a mistake.

The tool matters, but the system matters more.

Where does work enter? What context does the codebase give the agent? Whose judgment gets encoded into the rules? What does the system learn from the mistakes?

Those are the questions that determine whether AI becomes a side experiment or part of how engineering actually runs.

And that’s the strategy—AI as part of the engineering system.

Talk soon,

Chris.