Getting started with AI is easy. Knowing what to do with it is a different skill entirely.
For most of the last decade, the constraint in technology delivery was execution. Getting things built took time, and that time was the problem to solve.
AI has changed that. Execution is no longer where the pressure sits. What we're seeing across our teams at Optimation is that the bottleneck has moved upstream, into the quality of the thinking and direction that goes in before AI touches anything. The teams getting the most out of AI are not necessarily the ones with the most advanced tools. They're the ones who know their domain well enough to direct it precisely.
That shift is worth understanding, because it changes what good actually looks like.
Jeshua Hertzke, one of our engineering team leads, put it plainly:
"The biggest shift for me has been where I spend my thinking time. The heavy lifting now goes into precisely articulating what needs to be built and why, because AI handles execution well, but only when given the right context."
That's a meaningful reframe. If AI can handle execution, the bottleneck moves upstream. It moves into clarity: the quality of the brief, the sharpness of the question, the precision of the problem statement. The people who get the most out of AI are not necessarily the most technically capable. They're the ones who can think clearly about what they want and communicate it without ambiguity.
This doesn't diminish the technology. It clarifies where the human contribution matters most.
Anthony Brandon works in legacy .NET codebases, not exactly the environment people imagine when they picture AI-assisted development. But his workflow makes the case better than most.
His approach to each piece of work begins not with a prompt but with a prepared set of instructions that governs the full development cycle as a strict sequence: fetch the requirement, plan the implementation, write the code and tests, verify it works in the target environment, confirm the data, update the work item, and document the change. Each stage must complete and be confirmed before the next begins. That sequencing is not a convenience. It is how errors get caught before they compound. Anthony maintains separate versions of this for each codebase he works in, each one tuned to the specific architecture, constraints, and tooling of that environment.
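The sequencing above can be sketched in code. This is a hypothetical illustration, not Anthony's actual tooling: the stage names are taken from the cycle described, and the runner simply enforces that each stage confirms success before the next is allowed to begin.

```python
# Stage names mirror the development cycle described above.
STAGES = [
    "fetch_requirement",
    "plan_implementation",
    "write_code_and_tests",
    "verify_in_target_environment",
    "confirm_data",
    "update_work_item",
    "document_change",
]

def run_cycle(run_stage):
    """Run each stage strictly in order.

    run_stage is a callable that performs one stage and returns True
    only when that stage has completed and been confirmed.
    """
    completed = []
    for stage in STAGES:
        if not run_stage(stage):
            # Halt immediately so an error cannot compound into later stages.
            raise RuntimeError(f"stage '{stage}' did not confirm; halting")
        completed.append(stage)
    return completed
```

The point of the shape is the hard stop: a failure anywhere in the sequence halts the whole cycle rather than letting later stages run on top of a bad result.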
This is not a workflow where a human steps back and lets AI run. It is a workflow where a human has done the harder thinking up front: defining what the loop looks like, where the boundaries are, and exactly when it should stop.
What makes the approach work is specificity at the level that actually matters. Anthony does not just tell the AI what to build. He defines the architectural pattern the code must follow, the boundaries between each layer, and what each component is and is not responsible for. He specifies which constraints are non-negotiable, because in a legacy environment with strict browser compatibility requirements, a technically clever solution that does not run in the target environment is not a solution at all. The AI does not get to decide those things on its own. They are defined up front, in the instructions, before any work begins.
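Those non-negotiable constraints can be thought of as data the workflow checks mechanically, rather than judgments left to the AI. A minimal sketch, with entirely illustrative constraint names, might look like this:

```python
# Hypothetical hard constraints, defined up front in the instructions.
# A solution that violates any of these is rejected outright, no matter
# how clever it is. The names and checks here are illustrative only.
NON_NEGOTIABLE = {
    "runs_in_target_browser": lambda solution: solution.get("browser_compatible", False),
    "respects_layer_boundaries": lambda solution: not solution.get("crosses_layers", True),
}

def violated_constraints(solution):
    """Return the names of any hard constraints the proposed solution breaks."""
    return [name for name, check in NON_NEGOTIABLE.items() if not check(solution)]
```

Encoding constraints this way makes the decision explicit and repeatable: the AI never gets to argue its way past a compatibility requirement.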
There is also a hard stop built in after planning. Before any code is written, the proposed approach is presented for human review. That gate is structural, not discretionary. It does not depend on the AI judging whether its own plan is sound. A human looks at it first. That single choice is what keeps the loop from confidently producing the wrong thing at speed.
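The gate itself is easy to express in code, and doing so shows why it is structural rather than discretionary. In this hypothetical sketch (the function names are illustrative), execution simply cannot be reached without a human approving the plan:

```python
def gated_delivery(propose_plan, human_approves, execute_plan):
    """Present the proposed plan for human review before any code is written.

    The gate does not depend on the AI judging its own plan: approval is
    a hard precondition on the execution path.
    """
    plan = propose_plan()
    if not human_approves(plan):
        return ("rejected", plan)  # nothing downstream runs
    return ("executed", execute_plan(plan))
```

Because the check sits in the control flow rather than in the model's judgment, a confident but wrong plan stops at review instead of being produced at speed.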
The brief is not just the input. It is the governance layer.
"People think adopting AI is the hard part. It is not. The real challenge is directing it with enough domain knowledge and governance to get correct outcomes, not just fast ones."
Jeshua's observation that AI is now woven through the full delivery lifecycle, from early discovery on Miro boards through specification authoring and code generation to document extraction pipelines running on Azure, reflects something broader than tooling. At every stage, someone still needs to know enough about the domain to frame the right question, set the right constraints, and recognise when the output has missed the mark. Our Wellington team reached the same conclusion in a discussion over lunch: someone still needs to be in the room, asking the right questions and directing the work. What AI does is lift the quality of what people are already producing. It does not replace the person doing the thinking.
The work Anthony's team is now delivering is more complex than what was possible before, not because AI has removed complexity, but because it has made that complexity tractable. Work that would previously have been parked as too slow or too hard is getting done. That only happens because the people directing the AI understand the problem well enough to hold it to a standard.
One other shift is worth naming. The ability to tell AI plainly that something is wrong, and to say it without any social cost, genuinely changes how fast problems get fixed. The iteration tightens, errors get resolved quickly, and the loop keeps moving. It is a more meaningful acceleration than it first appears, and it is worth a conversation of its own.
The challenge, then, is not access to the tools. It is building the domain knowledge and the discipline to direct them well. Anyone can prompt. Not everyone can govern.