AI Strategy
Your AI Vendor Says You're Deployed. Your KPIs Say Otherwise.
Vivek Ravindran · March 3, 2026 · 7 min read
Here's a number that should stop every CIO mid-budget-cycle: 56% of CEOs say they have not realised either revenue increases or cost reductions from AI in the last 12 months. Not underwhelming returns. Not slow returns. No returns.
That's from PwC's 2026 Global CEO Survey — 4,450 CEOs across 95 countries. These are companies that have deployed AI, staffed AI teams, and spent real money. And more than half of them can't point to a single financial outcome.
Meanwhile, two-thirds of organisations report productivity and efficiency gains from AI, according to Deloitte's 2026 State of AI in the Enterprise report. Both of these things are true at the same time. And the gap between them is the most expensive problem in enterprise technology right now.
The Productivity Illusion
Here's what's happening. AI tools — Copilot, chatbots, document processors, code assistants — genuinely make individual tasks faster. A report that took four hours takes one. A customer query that required three clicks requires none. A first draft that took a morning takes ten minutes.
These are real improvements. People feel them. Managers notice them. When you survey your workforce, they'll tell you AI is making them more productive. And they're not wrong.
But productivity is not an outcome. Productivity is an input. And the gap between "people are faster" and "the business is better off" is where billions of dollars of AI investment currently live, unaccounted for.
Fewer than a third of executives say they can confidently measure AI ROI, according to IBM Think Circle research. Not that they've achieved it — that they can even measure it. The rest aren't failing to get returns. They're failing to know whether they're getting returns. That's a different — and arguably worse — problem.
When you can't measure something, you can't manage it. And when you can't manage it, you certainly can't scale it. So what happens? The AI programme keeps running. Budgets get renewed because nobody wants to be the person who killed the AI initiative. And the organisation accumulates more and more deployed AI with less and less evidence that any of it matters.
Deployed Is the New Vanity Metric
Enterprise technology has always had a weakness for metrics that feel meaningful but aren't. Server uptime. User adoption rates. Feature utilisation percentages. They all share the same characteristic: they measure activity, not value.
AI has created a new one: "deployed."
"We've deployed Copilot to 3,000 users." "We've deployed an AI-powered customer service bot." "We've deployed predictive analytics across our supply chain." Deployed. Deployed. Deployed.
But deployed to what end? Against which KPI? With what measurable impact on the business problem it was meant to solve?
The uncomfortable truth is that most enterprise AI implementations were never designed to answer these questions. They were designed to get AI into the organisation: to activate licences, to enable features, to complete a project plan. The success criteria were deployment-shaped, not outcome-shaped. And so deployment is exactly what they achieved.
This isn't a failure of technology. The technology works. It's a failure of design. The implementation was structured to produce a deployment, not to produce a business outcome. And those are fundamentally different objectives.
The Measurement Gap Is a Design Gap
The instinct, when confronted with poor AI ROI, is to reach for better measurement. Build a dashboard. Hire a data analyst. Commission a study. If we could just measure the impact more precisely, we'd see the value.
But you can't measure your way out of a design problem.
If an AI implementation was designed to "deploy Copilot to the finance team," and Copilot is now deployed to the finance team, the project succeeded on its own terms. The fact that nobody defined which finance process should improve, by how much, measured how, and validated within what timeframe — that's not a measurement failure. That's a design failure. The absence of measurable outcomes wasn't an oversight. It was baked into the project from day one.
IBM's CEO study puts a finer point on it: over the past three years, only 25% of AI initiatives have delivered their expected ROI, and just 16% have scaled enterprise-wide. These numbers aren't surprising when you consider that most AI initiatives were never designed with a specific, measurable outcome as the success criterion. They were designed with deployment as the finish line. And they crossed it.
What the Minority Do Differently
The minority of executives who can measure AI ROI aren't using better dashboards. They're running fundamentally different programmes.
They start with a business problem, not a technology. Not "let's deploy Copilot" but "let's reduce invoice processing time by 40%." The technology is selected to serve the outcome, not the other way around.
They define kill criteria upfront. If the target KPI doesn't move within a defined validation window, they stop and redirect. This sounds obvious, but it's almost never done. Most AI projects have expansion plans for when they succeed and no plan at all for when they don't. (A sketch of what such a decision rule can look like follows below.)
They validate before they scale. Instead of deploying to 3,000 users on day one, they deploy to 30 users against a specific use case with a specific KPI target. If it works, they have evidence. If it doesn't, they've spent weeks, not months, finding out.
And critically, they separate "the technology works" from "the business outcome improved." A chatbot that answers 90% of queries correctly is a working technology. A chatbot that reduced support costs by 20% is a business outcome. These are not the same thing, and confusing them is the single most common mistake in enterprise AI.
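To make that separation concrete, here is a minimal sketch of the kind of kill-or-scale rule described above. Every name and number in it is hypothetical (the use case, the KPI, the thresholds, the dates); the point is only that the decision reads the business KPI, never the technology metric.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OutcomeTest:
    """One use case, one KPI, one explicit kill-or-scale decision."""
    use_case: str
    kpi: str                   # the business outcome, not the technology metric
    baseline: float            # KPI value before the pilot
    target_change_pct: float   # e.g. -20.0 means "reduce by 20%"
    window_ends: date          # the validation window agreed upfront

def decide(test: OutcomeTest, observed: float, today: date) -> str:
    """Kill, scale, or keep validating, based only on KPI evidence."""
    required = test.baseline * (1 + test.target_change_pct / 100)
    hit = observed <= required if test.target_change_pct < 0 else observed >= required
    if hit:
        return "scale"     # evidence in hand: expand deliberately
    if today >= test.window_ends:
        return "kill"      # window closed, KPI unmoved: redirect the budget
    return "validate"      # window still open: keep measuring

# Note what is absent: "90% of queries answered correctly" never appears.
# The decision reads the business KPI, e.g. cost per resolved ticket.
pilot = OutcomeTest(
    use_case="support chatbot pilot, 30 users",
    kpi="cost per resolved ticket",
    baseline=14.00,
    target_change_pct=-20.0,      # success means 11.20 or below
    window_ends=date(2026, 4, 3),
)
print(decide(pilot, observed=13.10, today=date(2026, 3, 20)))  # "validate"
```

The design choice worth noticing is that "kill" is a first-class result of the rule, not a failure mode bolted on after the fact.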
Why the Structure Won't Fix Itself
If the fix is so straightforward — define the outcome, validate against a KPI, kill or scale based on evidence — why isn't everyone doing it?
Because the incentive structures don't support it.
Vendors are incentivised to sell licences. Success for them looks like broad adoption: seats activated, features enabled, platform embedded. They are not incentivised to help you discover that three of your five AI use cases should be killed before you spend another dollar on them.
System integrators are incentivised to bill hours. A two-week sprint that validates or kills a use case is a revenue problem. A six-month programme that deploys AI across four business units, regardless of whether any of them produces measurable outcomes, is a revenue solution.
Internal teams are incentivised to show progress. Nobody gets promoted for recommending that the company kill its AI initiative. People get promoted for scaling it, expanding it, and presenting it at the next leadership offsite.
Every stakeholder in the current system benefits from deployment. Nobody benefits from the uncomfortable question: but is it actually working?
The 56% Problem Is a Solvable Problem
None of this is inevitable. The companies in PwC's "vanguard" — the roughly one in eight that are achieving both additional revenues and lower costs from AI — aren't doing anything magical. They're applying discipline that the rest of the market skips.
They connect AI initiatives directly to strategic outcomes before a single line of code is written. They treat every use case as a hypothesis to be validated, not a project to be completed. They invest the minimum to prove whether the hypothesis holds, and they make explicit decisions to kill, pivot, or scale based on the evidence.
This is what we call Solution-Outcome Fit. It's the discipline of proving — not assuming, not projecting, but proving — that a specific AI implementation produces a specific business outcome before committing to scale. It's the same logic that venture capital applied to startups decades ago: don't pour money into something that hasn't demonstrated it works.
The difference between the minority who can measure ROI and the 56% who've seen no financial benefit isn't technology. It's whether anyone designed the implementation to produce a measurable outcome in the first place.
Three Questions for Monday Morning
If you're reading this and recognising your own organisation, here's where to start.
Ask your implementation partner what happens if it doesn't work. If you're midway through an AI programme, ask the team leading it: "If this use case doesn't move the target KPI within the next 30 days, what's the plan?" If the answer involves extending timelines or expanding scope, you're in a deployment programme. If the answer involves redirecting investment to the next highest-probability use case, you're in an outcome programme.
Audit your success metrics this week. Pull up the dashboard, the status report, the steering committee deck — whatever your AI programme uses to track progress. Count the metrics. How many measure deployment (users activated, features enabled, sessions completed)? How many measure business outcomes (cost reduced, revenue influenced, cycle time shortened, error rate decreased)? The ratio tells you everything about whether your programme was designed to produce value or produce activity.
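If it helps, the ratio can be made literal. A sketch of the tally, with made-up metric names standing in for whatever is on your own deck:

```python
from collections import Counter

# Illustrative only: substitute the metrics from your own steering deck.
metrics = {
    "users activated":           "deployment",
    "features enabled":          "deployment",
    "sessions completed":        "deployment",
    "prompts per user per week": "deployment",
    "invoice cycle time (days)": "outcome",
    "support cost per ticket":   "outcome",
}

tally = Counter(metrics.values())
print(tally)  # Counter({'deployment': 4, 'outcome': 2})
print(f"outcome share: {tally['outcome'] / len(metrics):.0%}")  # 33%
```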
Pick one use case and apply the test. Choose your highest-stakes AI initiative. Define the single KPI it should move. Define the magnitude of movement that would justify continued investment. Define the timeframe. Now look at whether anyone is actually measuring that. If not, you've just identified why your AI programme feels productive but doesn't appear in the P&L.
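As a sketch, the test above reduces to four fields and a completeness check. The field names are ours, not a standard, and the structure is illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UseCaseTest:
    kpi: Optional[str]                  # the single KPI it should move
    target_change_pct: Optional[float]  # magnitude that justifies investment
    window_days: Optional[int]          # validation timeframe
    measured: bool                      # is anyone actually tracking the KPI?

def gaps(t: UseCaseTest) -> list[str]:
    """Report what is missing before the test can even be applied."""
    missing = []
    if t.kpi is None:
        missing.append("no KPI defined")
    if t.target_change_pct is None:
        missing.append("no target magnitude")
    if t.window_days is None:
        missing.append("no timeframe")
    if not t.measured:
        missing.append("nobody is measuring it")
    return missing

# A deployment-shaped initiative typically fails all four checks:
print(gaps(UseCaseTest(None, None, None, measured=False)))
```

Each entry in that report is a line of questioning for the next steering committee.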
Sources
PwC, 29th Annual Global CEO Survey (2026). 4,450 CEOs across 95 countries. pwc.com/gx/en/ceo-survey/2026
Deloitte AI Institute, State of AI in the Enterprise (2026). 3,235 senior leaders surveyed August–September 2025. deloitte.com
IBM Institute for Business Value, CEO Study: AI and the Enterprise (2025). ibm.com/think
Alkemy Cloud helps enterprises close the gap between AI that is deployed and AI that delivers. We apply Solution-Outcome Fit, validating that each AI use case produces a measurable business outcome before it scales. If you're seeing the pattern described in this article, take our AI Readiness Self-Assessment to understand where you stand, or get in touch to discuss what an outcome-focused approach looks like for your organisation.