Optimation Group | April 16, 2026

The 80/20 rule for AI at work

Most people are asking the wrong question about AI. The question isn't "will AI take over my job?" It's "how much of my job should I actually hand to it?"

Aaron McLean, a tester at Optimation, has worked out his own answer: 80/20. Roughly 80% of what he produces now starts with AI. The test scenarios, the documentation, the automated test suites he's building in Visual Studio Code are all things he couldn't have done six months ago. But don't let the numbers mislead you. The 20% isn't the smaller contribution. It's the one that makes everything else work. That's where Aaron steps in: reviewing, adjusting, and applying the judgement that comes from actually knowing the system and the context. Take that 20% away, and the 80% is just output. With it, it's work worth shipping.

It's a practical mental model. And it's one a lot of people are landing on, even if they don't use those exact numbers.

But there's more than one way to make it work

Anthony Chan, who works in testing and automation, takes a different approach. For straightforward tasks, AI is a useful pair. But for systems with complex business requirements, he's found that handing the wheel over too early creates extra work. Verifying outputs, hunting for gaps, and fixing assumptions that were wrong from the start all add up. His workflow flips the ratio: he drafts the scenarios himself first, then brings AI in to review what he might have missed.


Yorick Canales, a Business Analyst, sits somewhere in between. User stories, report structures, early-stage drafts: AI gets him moving faster. But he's still the one validating, tweaking, and making the call on what's right.

Three people, three different workflows, one consistent thread: AI works best as a starting point or a sounding board, not a replacement for the person who understands the work. Where the 80/20 split lands depends entirely on what you bring to the table. Aaron can let AI carry 80% because he has the testing knowledge to know when something's right, when it's close, and when it needs to go back. Anthony has enough experience to spot where AI-generated scenarios fall short before they waste anyone's time, which is why he drafts first and reviews second. Yorick can validate and refine AI-generated user stories quickly because he already knows what a good one looks like. The pattern is the same across all three: deep domain knowledge isn't optional, it's what makes the split work. Without it, you're not getting 80% for free. You're just inheriting 80% of problems you can't identify.


What happens when you get the split wrong?

The risk of handing over too much too early isn't that AI produces nothing. It's that it produces something that looks right. Plausible structure, confident tone, reasonable conclusions. For low-stakes work, that's fine to catch in review. For complex systems or client-facing outputs, a convincing result that's subtly wrong can create more rework than starting from scratch would have.

Anthony's instinct to draft first and review second isn't caution for its own sake. It's a considered response to that specific risk. The more consequential the output, the more important it is that the human expertise goes in at the start, not just the end.

Getting the split right means knowing which category your work falls into, and being honest about it.

How to find your own ratio

The 80/20 rule isn't a fixed formula. It shifts depending on the complexity of the task, the consequences of getting it wrong, and how well you can specify what you actually need. A rough way to find your own split is to ask three questions about any given task.

  1. How well can I describe what good looks like? If you can articulate the output clearly, AI can get close to it. If you're still working that out yourself, you need to go first.
  2. How much does domain knowledge matter here? The more the quality of the output depends on knowing the system, the client, or the history, the more your input needs to anchor it.
  3. How costly is a mistake? For low-stakes drafts, let AI lead and review at the end. For anything where errors have real consequences, your expertise belongs at the beginning of the process.

Work through those questions and the right ratio usually becomes clear. It won't be the same for every task, and it will shift as you get more experienced with what AI can and can't reliably do.
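For readers who like to make a heuristic concrete, the three questions could be sketched as a toy scoring function. The scores, weights, and output range below are illustrative assumptions for the sketch, not a formula the article proposes.

```python
def suggest_ai_share(can_describe_good: int,
                     domain_knowledge_needed: int,
                     cost_of_mistake: int) -> int:
    """Rate each question from 1 (low) to 5 (high); returns a rough
    percentage of the task you might let AI lead on. Purely illustrative."""
    share = 20 + 15 * can_describe_good    # a clearer spec lets AI carry more
    share -= 10 * domain_knowledge_needed  # context-heavy work needs a human anchor
    share -= 10 * cost_of_mistake          # high stakes: expertise goes in first
    return max(0, min(90, share))          # never fully hands-off

# A well-specified, low-stakes draft:
print(suggest_ai_share(5, 1, 1))  # 75
# Complex business rules where errors have real consequences:
print(suggest_ai_share(2, 5, 5))  # 0
```

The exact numbers matter less than the direction of each adjustment: clarity of spec pushes the split toward AI; domain dependence and consequence pull it back toward you.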


So what does this mean in practice?

If you're waiting for a single right way to use AI at work, you're going to wait a long time. The more useful question is this: what does good look like for your role, and how much of that can AI reliably carry?

Start there, and adjust as you go.

What's your current split, and has it changed the way you think about your work?
