Gartner says 22% of organisations are getting real value from AI. Here's what the other 78% are missing.

Analysts have confirmed what many of us suspected. Most organisations are not getting meaningful returns from their AI investments. The reasons are not technical. They are structural, and they are fixable.

Gartner published its Predicts 2026: Intelligent Applications report in December 2025 with a number that deserves more attention than it has received. Only 22% of organisations report that generative AI tools return significant value.

That means 78% are spending budget, running pilots, updating roadmaps, and publishing AI strategies while failing to demonstrate that any of it is working.

The report also found that 46% of IT leaders believe AI agents will replace their core enterprise systems within two to four years, and that 77% of organisations plan to prioritise investment in AI-ready data.

Those two figures, combined with the 22%, tell you almost everything you need to know about why the gap exists.

The problem is not AI. The problem is sequence.

The organisations rushing toward agent-based replacements of CRM and ERP systems are skipping steps that have always mattered in enterprise technology. They are designing the ceiling before the foundations are stable.

There is a broader pattern worth naming here. The METR study found that experienced developers were 19% slower when using AI assistance, despite believing they were faster. The dynamic at the organisational level mirrors this exactly. Confidence in AI's potential is outpacing the ability to realise it, and the gap between expectation and return is quietly accumulating.

Gartner estimates that by 2030, no more than 30% of functionality in enterprise commercial-off-the-shelf applications will be replaced by custom-built AI solutions. That number sounds conservative. But consider what it implies: the vendors are not standing still. Salesforce, Microsoft, and Oracle are embedding AI at pace. Most organisations that rush to build custom AI agents today will find that the gap they were trying to close has been closed by the platform they were trying to replace, at a fraction of the cost and risk.

Root cause one: no outcome definition

Gartner predicts that by 2030, 60% of enterprise applications will be selected based on business outcome alignment rather than functionality. The current reality is that most AI adoption is driven by the same functional logic that created the application portfolios organisations already have: "We need AI. What does AI do? Let's buy or build AI that does those things."

Outcome-led adoption works the other way around. It starts with a specific, measurable business problem, works backward to what capabilities would address it, and selects or builds accordingly. That process is harder and slower. It is also the only process that reliably produces the 22%.

The build versus buy question is collapsing under AI's influence, as I have written about previously. Whether you build or buy, the question that matters is not whether you have the capability, but whether that capability is returning measurable value against a defined objective. Very few organisations are asking the second question with any rigour.

Root cause two: data readiness treated as a future problem

Gartner found that only 14% of application leaders are confident their data is suitably secured and governed to provide value to AI interactions. And yet 77% plan to prioritise investment in AI-ready data going forward.

That gap between current confidence and future intention is where most AI implementations quietly fail. They proceed on the assumption that data readiness will arrive in time, rather than treating it as a prerequisite.

The typical failure path looks like this. An organisation builds or buys an AI capability, discovers that the data feeding it is inconsistent, incomplete, or not structured in a way the model can use effectively, and then attempts to retrofit data governance around a system that is already generating outputs that cannot be trusted. That remediation is significantly more expensive than building the foundation first. It also means the organisation has been making decisions, or allowing others to make decisions, based on AI outputs of unknown reliability.

This is not a new problem dressed in new clothes. It is the same data quality problem that has undermined analytics and reporting for decades, now operating at a layer where the consequences of bad data are less visible and harder to audit.

Root cause three: deployment as strategy

The third failure mode is the most common, and the hardest to challenge internally, because it produces the kind of activity that looks like progress from the outside.

Organisations measure their AI maturity by the number of tools deployed, the number of pilots completed, or the percentage of workflows that have been AI-enabled. None of these measure value. They measure motion.

A 2025 survey found that 56% of IT leaders strongly believe that IT cannot drive AI adoption on its own and requires significant business support. That finding reflects a structural problem that deployment metrics cannot solve. If the business has not defined what value looks like, IT cannot deliver it, regardless of how many tools are in production.

Gartner's research on high AI maturity organisations is instructive here. The common characteristic is not capability breadth or budget. It is centralisation of AI strategy, governance, application development, and data management. In other words, the organisations getting value have joined up the decisions that most organisations are still making separately.

What the 22% actually do differently

The organisations currently in the 22% did not get there by deploying more AI. They got there by deploying it with more deliberation.

They define the business outcome before selecting the tool. They treat data readiness as a blocker, not a parallel workstream. They measure the AI capability against the outcome it was deployed to address, not against a generic benchmark. And they have someone in a position of genuine authority who holds the organisation to account for the gap between investment and return.

That last point matters more than any of the others. Gartner recommends deploying agentic AI only in use cases with low business risk and high technical viability. That is sound advice. But following it requires someone who can make that assessment credibly, challenge vendor claims, and escalate when the evidence does not support continued investment.

The gap between 22% and 78% is not a technology gap. It is a governance gap. And unlike the technology, the governance does not improve automatically with time or with the next model release. It has to be built, maintained, and held accountable.

If your organisation is in the 78%, the answer is unlikely to be a different AI tool.

Tags: AI, Enterprise AI, AI Adoption, Technology Leadership, Gartner, AI Strategy, ROI
