How to Actually Deliver Value
Most leadership teams I talk to are wrestling with the same question: How fast can we leverage AI? There’s pressure, sometimes self-imposed, sometimes from the board, to deliver AI transformation yesterday. That pressure creates a temptation to build something ambitious and fully automated right now.
I’ve watched this play out, and it almost never works the way people hope.
There’s a better path, borrowed from how product teams that actually move the needle operate. It has three stages. Each stage teaches you what you need to know before the next one. Start by giving your teams tools and watching what they do with them. Next, automate the workflows that prove repetitive. Finally, move to full autonomy, but only if the math justifies it.
I’ve seen this work for product development teams, marketing teams, and customer care teams. The pattern is the same.
Stage 1: Give Them Tools, Then Watch
This first stage is so simple it feels like you’re not doing enough. You’re not building anything. You’re equipping and enabling.
- A marketing team copy-pastes product info into ChatGPT to generate social posts.
- A customer care team uses Claude to draft responses.
- A sales team uses AI to research prospects.
- A finance team uses AI to extract invoice data.
- Engineering uses Cursor to help with code comments and documentation.
Humans decide everything: when to use the tool, how to prompt it, whether the output is usable, and what to do with the result. The AI is an assistant inside a manual workflow.
It can feel slow, but it’s valuable learning time: you discover which tasks are genuinely repetitive, which workflows jam people up, and which problems your teams actually want solved. You get real data to work from instead of guessing.
There’s a confidence piece too. Your teams develop intuition about what these systems can and can’t do. They learn how to prompt effectively. They hit the edge cases and see the failure modes in real time, and that ground truth matters because it shapes everything that follows.
Stage 2: Automate, But Keep a Human in the Loop
The next thing you look for is repetition. When someone is running the same prompt over and over, or sharing a working prompt with teammates who have the same problem, it’s time to start building.
This stage is where you create automation that runs the AI process and surfaces the result for human review and approval.
- A support system auto-drafts responses for agents to edit.
- A marketing platform generates social variants for the team to choose from.
- A finance tool categorizes invoices and flags unusual spend for review.
- A sales system scores leads and suggests follow-up timing for reps to execute.
You’re shifting from one person making decisions to many people making decisions faster. AI handles the repetitive thinking. Humans handle judgment, context, and protection. You’re also building a feedback loop. Each approval, and each rejection, teaches the system what good looks like inside your organization.
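To make the shape of this concrete, here’s a minimal sketch of the Stage 2 pattern in Python. The names (ReviewQueue, the stand-in lambda “model”) are hypothetical, not a real library API; the point is only that nothing ships without an explicit human decision, and every approval, edit, or rejection is logged as feedback.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class ReviewItem:
    ticket_id: str
    draft: str
    approved: Optional[bool] = None   # None means still pending human review
    final_text: Optional[str] = None

@dataclass
class ReviewQueue:
    """Stage 2 pattern: the AI drafts, a human approves or edits before anything ships."""
    generate_draft: Callable[[str], str]             # plug in your real model call here
    feedback_log: List[ReviewItem] = field(default_factory=list)

    def draft(self, ticket_id: str, customer_message: str) -> ReviewItem:
        item = ReviewItem(ticket_id, self.generate_draft(customer_message))
        self.feedback_log.append(item)               # every draft is kept for the feedback loop
        return item

    def review(self, item: ReviewItem, approved: bool, edited_text: Optional[str] = None) -> Optional[str]:
        # The human decision is recorded; edits and rejections become training signal later.
        item.approved = approved
        item.final_text = (edited_text or item.draft) if approved else None
        return item.final_text

# Usage with a stand-in "model" (replace the lambda with a real API call):
queue = ReviewQueue(generate_draft=lambda msg: f"Thanks for reaching out about: {msg}")
item = queue.draft("T-1042", "my invoice shows the wrong amount")
print(queue.review(item, approved=True))
```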
The jump from Stage 1 to Stage 2 takes engineering work, but now it’s justified because you already validated that the problem is real.
Stage 3: Fully Autonomous
At Stage 3, you finally try full automation. This is the first time you consider having no human in the loop. The system runs end-to-end, with output flowing straight into the business process.
- A marketing platform auto-generates, schedules, and publishes social posts.
- A finance tool auto-categorizes transactions and settles approved invoices.
- A sales system auto-scores leads, assigns them to reps, and schedules outreach.
- A support system auto-routes tickets to specialists and responds directly to customers.
- An inventory system monitors stock levels and automatically places supplier orders when thresholds are breached.
But “Ready for Stage 3” depends on your function and how much risk you can tolerate.
For low-stakes work, 80% accuracy might be fine. An auto-generated social post that occasionally needs a tweak? That might be acceptable. Higher-stakes decisions might require 95% or 99% accuracy. If you’re giving medical diagnoses, detecting fraud, or making decisions that could damage customers, you might never remove the human, or only when AI itself advances far beyond where we are now.
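One way to encode that risk tolerance is a confidence gate: fully automate only above a per-workflow threshold, and fall back to human review below it. This is an illustrative sketch, not a recommendation of specific numbers; the workflow names and thresholds are assumptions you’d replace with your own.

```python
# Illustrative confidence gate: each workflow gets a threshold that reflects
# how much risk it can tolerate. Anything below threshold falls back to
# Stage 2 human review; some workflows never auto-execute at all.
THRESHOLDS = {
    "social_post": 0.80,          # low stakes: an occasional fix is acceptable
    "invoice_settlement": 0.99,   # money moves: keep the bar high
    "fraud_alert": None,          # by policy, always a human decision
}

def route(workflow: str, confidence: float) -> str:
    threshold = THRESHOLDS.get(workflow)
    if threshold is None:
        return "human_review"
    return "auto_execute" if confidence >= threshold else "human_review"

print(route("social_post", 0.86))         # auto_execute
print(route("invoice_settlement", 0.93))  # human_review
print(route("fraud_alert", 0.999))        # human_review
```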
Stage 3 isn’t a universal destination. It’s an option, available when the risk-reward equation justifies it.
The Counterintuitive Speed Argument
Here’s what challenges most people’s instincts: when you’re pressured to show AI value fast, the reflex is to skip Stages 1 and 2 entirely. You build ambitious systems immediately with heavy infrastructure investment to reach your vision.
This approach delays real value.
Without the learning that comes from Stage 1, you’re automating problems you’ve guessed at, not problems people have. You burn engineering cycles on workflows that might not create the value you expected. You ship systems whose failure modes you’ve never actually seen in your business.
The paradox: starting with Stage 1, just giving teams tools and watching, is faster in the end. It cuts straight to problems worth solving. It creates organizational conviction about AI through observation, not assertion. It identifies which automation bets are high-leverage before you spend resources on them.
Stage 2 then becomes a focused, high-confidence engineering effort. Stage 3 happens naturally when the data supports it, not when the calendar demands it.
Moving Forward
The three-stage approach isn’t flashy. You don’t get to write a press release about “fully autonomous AI agents.” What you get is value creation that’s anchored in real problems, scales because your teams understand it, and compounds because you understood what actually matters before you built it.
Start with tools, observe, and automate what works. Move to autonomy only when it makes sense.
This path to value is faster than it appears, and vastly faster than sprinting to Stage 3 without first understanding what’s worth automating.