CFOs and CIOs at organisations that have deployed AI in the last two years very likely have a new and unwelcome expense line. Call it $200,000 to $800,000 annually - depending on headcount - for AI licences across the employee base: Copilot, Claude, ChatGPT Enterprise, maybe a vertical tool or two for the legal or marketing team. And if they're honest about what they can point to in return, the number is uncomfortable.
This phenomenon has been called “shelfware” in the past, or “pilot purgatory” in the more recent AI context. And it’s more common than most organisations are willing to admit publicly.
One of the most striking revelations from our survey of 100 buyers of AI enablement services was the typical pattern of AI investment, and the universal disconnect within it: while 78% of organisations prioritised tools for their initial AI investments, every single respondent later agreed that software alone cannot drive meaningful impact. This wasn't a mere majority or a strong trend; it was unanimous.

The tools-first pattern was rational at the time
Tools are the lowest-friction entry point into enterprise AI. No deep integration with internal systems. Faster to deploy. Easier to get a budget for. In 2023/24, the pace of model releases made it feel urgent to just get something in. And the demos truly delivered: an AI-written meeting summary or a demand forecast extrapolated from existing data looked like magic to a non-technical functional leader. They opened their wallets and hoped for the best.
In the US, only 4% of organisations started with training rather than tools; the UK and EU were only somewhat higher at 14%. The instinct was almost entirely tools-first.
The procurement cycles were short. The strategy was: get the tools deployed, then figure out the rest.
The rest, it turns out, is where most of the ROI comes from.
Moderate ROI is the hardest problem to solve
Before investing in enablement services, 72% of organisations in our survey reported only moderate ROI from their AI efforts. Another 15% reported minimal ROI. Just 9% reported a significant impact.

The 72% in the middle is the number that deserves attention. These aren't organisations that failed at AI. They tried it, got something out of it, and now can't quite explain what changed. Moderate ROI is the zone where you know AI works in principle but can't point to anything that actually changed how the business operates.
Workday's recent investor report adds texture to why this happens. Their finding: AI increases gross productivity, then destroys roughly 37% of it through rework. They call it the "AI tax." For every 10 hours AI saves, about 4 hours go back into fixing AI output. Only 14% of users end up net-positive.
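To make the arithmetic concrete, here is a back-of-envelope sketch in Python. The function name and the assumption that the 37% rework rate applies uniformly to saved hours are ours, for illustration - not Workday's methodology.

```python
# Back-of-envelope sketch of the "AI tax" arithmetic.
# Assumption (ours, for illustration): the ~37% rework rate
# applies uniformly to the gross hours AI saves.

def net_hours_saved(gross_hours: float, rework_rate: float = 0.37) -> float:
    """Hours actually gained after subtracting time spent fixing AI output."""
    return gross_hours * (1 - rework_rate)

print(net_hours_saved(10))  # 6.3 -- roughly 4 of every 10 saved hours go back into rework
```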
Does logging into ChatGPT improve someone's productivity? Maybe it does. Maybe it doesn't. Definitely not when the user then spends three hours correcting what the tool produced.
But here's the thing: because there's no universal measure for AI ROI, organisations resort to what can easily be measured - logins, licences activated, hours of training completed. These metrics are visible and concrete. Whether they correlate with actual AI impact is a different question.
Structural barriers to meaningful impact
Our survey asked why organisations invested in enablement services after their initial tool deployment. The most common answers: 28% cited difficulty translating AI into practical workflows, 26% cited technical and systems integration complexity. Combined, that's 54% pointing to execution capability as the barrier - not tool quality.

The tools were fine. The organisations lacked the internal expertise to connect AI capability to how their business actually creates value.
This is a structural gap. You can't hire your way out of it with a single AI lead. The person in that role needs to stay current with a tools landscape that's changing faster than any hiring cycle can accommodate. They need significant political capital inside the firm to move resistant teams. They need communication skills to engage employees who are, at best, cautiously curious and, at worst, actively disengaged.
A RevOps leader at a successful tech scale-up recently told us they're replacing their internal team of five with a RevOps consulting firm on retainer. The rationale: the field moves too fast for ordinary hires to keep pace, so retaining outside experts is a better use of funds.
I suspect this pattern will become more common, not less.
The second structural barrier is data. Many organisations are stuck because their underlying systems are simply not ready for AI: not consistent enough, not connected, gaps throughout. Fixing that is a separate, unglamorous, expensive, and largely human effort. There are no AI tools to shortcut it. And most organisations are not eager to fund it.
The training that was purchased is not the training that was needed
The survey shows that workforce training was the most widely purchased AI enablement category: 87% of organisations bought some form of it. If you look only at that number, it seems like training has been done.

It hasn't.
Purchasing training once is not the same as building consistent AI capability across an organisation. What most organisations bought was an introduction. What they need is a practice.
Personal computers took approximately 20 years to get meaningfully adopted across the workforce. Cloud applications took 10–12 years. The expectation that AI adoption can be accomplished through a two-hour remote workshop is, in hindsight, difficult to defend.
The adoption timelines for prior technology shifts weren't slow because the technology was weak. They were slow because changing how people work - actually changing it, not just giving them access to new tools - takes sustained effort, repeated exposure, and redesigned workflows. Workday's data makes this precise: in organisations where fewer than 25% of roles are genuinely AI-ready, net productivity actually drops. AI bolted onto 2015 job descriptions doesn't produce 2026 outcomes.
Budget allocation data from our survey tells a revealing story. Asked how they would allocate an unconstrained budget, 50% of respondents would direct the largest share to technical integrations, 31% to workflow redesign, and only 15% to workforce training. Leaders feel the integration wall most acutely because it's visible and concrete. The training gap is harder to see, but it's a very real issue standing in the way of adoption.
The tools-first cohort is enormous and largely stuck. These brave early adopters didn't fail at AI: they got partial results and then hit a wall that more tools won't move. The wall is made of three things: a capability gap that can't be hired away with one AI lead, data infrastructure that was never ready, and training that was purchased but not embedded.
The organisations that are getting real value - Workday's "Augmented Strategists," the 14% who are net-positive - share a common pattern: they invested in skills, redesigned roles around AI, and measured outcomes rather than activity. They are still rare. And that is good news for everyone working in AI enablement services, even if the task at hand will often involve solving someone else’s mess.
– Daria

