Three clients. Three lessons. One uncomfortable truth about AI adoption.
Gartner reports that only 22% of organisations are getting significant value from AI. Having worked inside organisations on both sides of that number, I can tell you the gap is not technical. It is structural, and the evidence is hiding in plain sight.
Gartner's Predicts 2026: Intelligent Applications report landed last December with a figure that should make every technology leader uncomfortable. Only 22% of organisations report that generative AI tools return significant value.
The rest are spending money, running pilots, and producing reports that say "promising results" while the return on investment fails to materialise.
I have been inside organisations on both sides of that number. Three engagements in particular illustrate exactly why the gap exists, and what it actually takes to land on the right side of it.
When the vendor becomes the problem
A global humanitarian organisation came to me needing independent scrutiny of a modernisation proposal from their existing development agency. The proposal was substantial, and leadership wanted a second opinion before committing budget.
What I found was significant. The entire codebase sat in a single C# project with no separation of concerns. The core library the agency itself maintained was outdated and carried security vulnerabilities. And when I broke down the proposed costs line by line, 60% of the budget was effectively addressing technical debt the agency had created.
The agency had framed their own failures as the client's modernisation cost. Without independent challenge, that proposal would have passed.
This pattern is not unusual. It is what emerges when organisations adopt technology partnerships without the internal capability to scrutinise technical decisions independently. That dynamic becomes more pronounced when AI is layered into the conversation, because the complexity of the claims increases while the client's ability to challenge them often does not.
Gartner's report notes that many organisations prioritise adopting agentic AI without a clear understanding of how it aligns with their specific business goals. The same structural weakness that allowed an agency to obscure 60% of unjustified costs is the weakness that allows AI vendors to sell capability that cannot be evaluated against an outcome. The problem is not dishonesty. It is the absence of independent scrutiny.
When agentic AI actually worked
At a rapidly growing UK PropTech platform, the documentation situation was critical. 390 repositories. 5% documentation coverage. New developers spending weeks getting up to speed before they could contribute meaningfully. The cost of that onboarding friction was real, measurable, and compounding as the team scaled.
We implemented AI-powered documentation generation across the entire repository estate. Coverage went from 5% to 100%. The time to answer "how does this work?" dropped from hours to minutes. Developer onboarding reduced from weeks to days.
This engagement sits squarely in what Gartner predicts will be the 35% of organisations that realise measurable value from agentic AI by 2030. The reason it worked is straightforward: the business problem was specific, the outcome was measurable before a single tool was selected, and the tool was chosen to serve the outcome rather than the other way around.
That sequence matters more than any other factor. Outcome first. Tool second. It sounds obvious. But the majority of AI adoption I see in the market runs that process in reverse, starting with a capability and then searching for problems it might solve.
When architectural discipline beats capability chasing
I spent two years as Fractional CTO for a US wellness marketplace startup. During that time, the business was acquired, pivoted repeatedly, and worked through three development agencies. Each agency left behind problems. Each pivot changed requirements. Each change in ownership reshuffled priorities.
The platform survived all of it without significant architectural changes.
That durability was not accidental. The architecture was designed for volatility from the start: 45+ microservices, each owning its own data, built on Dapr with deliberate separation that allowed components to be replaced without systemic disruption. When an agency failed, their services could be rebuilt without taking down the platform. When the business pivoted, new capabilities could be added without rewiring what already worked.
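The replaceability principle is easier to see in miniature. Here is a minimal sketch, not the platform's actual code and with every name illustrative, of how a caller can depend on a service contract rather than a concrete implementation, so a failed agency's component can be rebuilt without touching anything that calls it:

```python
from typing import Protocol


class BookingService(Protocol):
    """Contract the rest of the platform depends on."""

    def reserve(self, slot_id: str) -> bool: ...


class LegacyAgencyBooking:
    """Hypothetical implementation left behind by a departed agency."""

    def reserve(self, slot_id: str) -> bool:
        # Simplified stand-in for the legacy reservation logic.
        return slot_id.startswith("slot-")


class RebuiltBooking:
    """Hypothetical drop-in replacement built after the agency failed."""

    def reserve(self, slot_id: str) -> bool:
        # Different internals, same contract.
        return len(slot_id) > 0


def checkout(booking: BookingService, slot_id: str) -> str:
    # The caller knows only the contract, not the implementation,
    # so swapping LegacyAgencyBooking for RebuiltBooking requires
    # no change here.
    return "confirmed" if booking.reserve(slot_id) else "failed"


print(checkout(LegacyAgencyBooking(), "slot-42"))  # confirmed
print(checkout(RebuiltBooking(), "slot-42"))       # confirmed
```

The same idea scales up: with each service owning its data behind a stable interface, replacing one service is a local change, not a systemic one.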
Gartner warns organisations not to rush into custom AI builds chasing cost savings, noting that returns on AI agents have been highly variable and that implementation will be inhibited by technical debt, a lack of AI-ready data, and organisations' own entropy. The wellness marketplace engagement illustrates exactly why that warning matters. The organisations that build well for today create options for tomorrow. The organisations that chase capability before establishing foundations spend tomorrow paying for today's shortcuts.
AI is not exempt from this principle. In some ways it amplifies it. AI capability is being added to existing enterprise platforms so quickly that the window in which custom-built agents provide a genuine advantage over embedded alternatives is narrowing faster than most roadmaps account for.
What connects these three
In each case, the outcome that mattered was defined before the technical approach was selected: avoiding a costly mistake, reducing onboarding time, surviving business uncertainty.
The humanitarian engagement was valuable because independent scrutiny existed to challenge a vendor claim. The PropTech engagement was valuable because documentation coverage and onboarding time were measurable targets before any tool was evaluated. The wellness marketplace engagement was valuable because architectural resilience was treated as a business requirement, not an engineering preference.
The 78% of organisations failing to get value from AI are not failing because the technology does not work. They are failing because they have deployed technology before establishing what value would look like, and without the governance structures to know when value is not arriving.
That is not a technology problem. It is a leadership problem. And it is the kind of problem that an independent technical perspective, one that sits outside both the vendor relationship and the internal pressure to show AI progress, is positioned to address.
If your organisation is measuring AI adoption by the number of tools in production, you are measuring the wrong thing. The number that matters is the one Gartner keeps publishing: 22%.
The question worth asking is which side of it you are actually on.
