1. The most common AI adoption barrier in UK firms is identifying viable use cases, not cost or technology
2. Strong use cases are defined by specificity, measurability, and genuine workflow integration, not by impressiveness in a demo
3. Use cases that look powerful in isolation but lack commercial anchoring almost never scale
The ONS analysis of management practices and technology adoption in UK firms contains a finding that is, in the context of the current AI landscape, somewhat uncomfortable.
The most common barrier to AI adoption — cited by 39% of UK firms — is not cost. It is not access to technology. It is not regulatory uncertainty. It is difficulty identifying activities or business use cases that AI could meaningfully improve.
This is a useful corrective to the dominant narrative about AI adoption, which tends to frame the challenge as primarily technical or financial. The evidence suggests that for the largest share of businesses not yet extracting value from AI, the problem is upstream: it lies in the strategic and analytical work of identifying where AI actually belongs in the business, with enough specificity to be actionable.
That work is harder than it looks. It requires discipline, commercial clarity, and the willingness to say no to use cases that seem appealing but are not genuinely valuable.
Why the obvious choices are often the wrong ones
The use cases that tend to attract the most initial enthusiasm in AI planning conversations are usually the ones that are technically impressive, visible, and easy to demonstrate. Customer-facing chatbots. AI-generated content at scale. Intelligent automation of complex processes. Predictive analytics that produce counterintuitive insights.
Some of these are excellent use cases in the right context. Many of them are solutions looking for problems.
The reasons they consistently underperform as first investments are specific. They tend to require significant data preparation before they produce reliable outputs. They often involve workflow complexity — touching multiple systems, multiple teams, multiple process steps — that makes integration expensive and slow. They require change management that the business is not prepared for. And they are chosen primarily because they are impressive rather than because they solve a specific, high-value commercial problem.
The result is a pattern that has become very familiar: a technically capable system that is demonstrated successfully, deployed with enthusiasm, and used inconsistently by the people it was built for — because the workflow integration is too heavy, the outputs are too unreliable, or the problem it solves turns out to be less commercially important than initially assumed.
The three characteristics of a strong use case
The use cases that reliably produce commercial value share three characteristics.
The first is a clear operational problem with a known cost. Not an aspiration to improve a category of activity, but a specific thing that happens regularly, takes time, and produces a known cost or constraint. “Our proposal writers spend an average of 3.5 hours producing first drafts for new business responses” is a problem. “We would like to improve our proposal process” is not.
The cost does not have to be precisely calculated. But there needs to be a clear intuition — validated at least informally — that this problem is significant enough that solving it would produce meaningful value. Use cases without this foundation tend to produce interesting results that nobody has a strong incentive to scale.
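To make “a known cost” concrete, here is a minimal back-of-envelope sketch of the proposal-drafting example above, written in Python. Apart from the 3.5 hours, every figure is an invented assumption; what matters is the shape of the calculation, not the numbers.

```python
# Back-of-envelope sizing for the proposal-drafting example.
# Every figure except hours_per_draft is an illustrative assumption, not real data.
drafts_per_year = 200       # assumed volume of new business responses
hours_per_draft = 3.5       # current average drafting time, from the example above
loaded_hourly_rate = 60     # assumed fully loaded cost per writer-hour, in GBP

annual_cost = drafts_per_year * hours_per_draft * loaded_hourly_rate
print(f"Current annual drafting cost: £{annual_cost:,.0f}")   # £42,000

# If an AI-assisted workflow cut drafting to roughly one hour per draft:
hours_with_ai = 1.0         # assumed post-adoption drafting time
annual_saving = drafts_per_year * (hours_per_draft - hours_with_ai) * loaded_hourly_rate
print(f"Indicative annual saving: £{annual_saving:,.0f}")     # £30,000
```

Even at this level of precision, the calculation is usually enough to tell whether a problem clears the “meaningful value” bar.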
The second characteristic is a workflow that can be genuinely redesigned around the AI output. This is the most consistently overlooked requirement. The value of an AI system is almost entirely dependent on whether its output changes how work is done — not whether the output is technically good.
A document summarisation system that produces high-quality summaries that analysts read alongside the original documents they were already reading does not save time. The same system, integrated into a review process so that the summary is the primary input to the first-pass review and the original document is consulted only when the summary raises questions, produces genuine efficiency. The difference is workflow design, not technology.
DSIT’s AI adoption research shows that the businesses generating the most value from AI are those that have made deliberate choices about how AI outputs fit into their operating processes, not those that have deployed the most capable tools.
The third characteristic is a named owner for the commercial outcome. Not the technical delivery — the business result. Someone who is accountable for whether the use case delivers the value that justified the investment, and who has the authority and motivation to drive the operational changes required to get there. Use cases without this ownership structure drift. The system works. Nobody drives the adoption.
The filtering question that eliminates the wrong candidates
When evaluating a potential AI use case, one question is more useful than any framework: if this system performed exactly as described in the best-case scenario, what would specifically be different about how this part of the business operates, and how would we measure that difference?
If that question produces a specific, measurable answer — time saved, errors reduced, decisions made faster, costs reduced by a calculable amount — the use case is at least plausible. If it produces something like “it would be more efficient” or “we would have better insights,” the use case is not yet defined well enough to invest in.
This is a high bar, and it eliminates many ideas that feel compelling in early conversations. That elimination is the point. The investment saved by not pursuing use cases that fail this test is almost always larger than the value those use cases would have produced if pursued.
What to do with the ideas that do pass the test
Once a small number of use cases that meet the commercial clarity, workflow integration, and ownership criteria have been identified, the question becomes one of sequencing.
The businesses that scale AI most effectively tend to start with the use case that combines high confidence of success with significant commercial impact. Not the most impressive use case. The one most likely to work, produce visible value, and build the organisational confidence and learning required to tackle more complex problems.
This sequencing logic runs counter to the instinct of many technology and innovation teams, who gravitate toward the most ambitious or technically interesting problem. The ambitious use cases have higher variance. They produce spectacular successes occasionally and expensive failures more frequently. The lower-variance, well-scoped, process-oriented use cases produce consistent returns and the organisational capability to take on more complex problems over time.
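As a purely illustrative expected-value sketch of that variance argument, with every probability and payoff invented for the example:

```python
# Illustrative expected-value comparison; all probabilities and figures are
# invented assumptions, not data from the surveys cited in this article.

# Ambitious, high-variance use case: occasional big win, frequent expensive failure.
p_win_ambitious = 0.2
win_ambitious = 500_000     # GBP value if it succeeds, assumed
loss_ambitious = 150_000    # sunk delivery cost if it fails, assumed
ev_ambitious = p_win_ambitious * win_ambitious - (1 - p_win_ambitious) * loss_ambitious
# 0.2 * 500,000 - 0.8 * 150,000 = -20,000

# Well-scoped, low-variance use case: modest but reliable return.
p_win_scoped = 0.8
win_scoped = 80_000
loss_scoped = 30_000
ev_scoped = p_win_scoped * win_scoped - (1 - p_win_scoped) * loss_scoped
# 0.8 * 80,000 - 0.2 * 30,000 = 58,000

print(f"Ambitious EV: £{ev_ambitious:,.0f}; well-scoped EV: £{ev_scoped:,.0f}")
```

On these assumed numbers the well-scoped use case wins on expected value alone, before counting the organisational confidence and learning it also produces.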
McKinsey’s 2025 global survey found that organisations generating the strongest value from AI were more likely to follow a sequenced, disciplined approach to use case selection and deployment than to pursue broad, simultaneous rollout across the organisation. The signal is consistent: discipline in selection produces better outcomes than ambition in scope.
Relevant service CTA: AI Guidance & Advisory — independent practitioner support to identify the use cases worth backing and build the operational conditions that allow them to deliver.
Related posts: Why UK AI Initiatives Stall After the Pilot Stage | The AI Readiness Checklist for UK SMEs | Why Buying an AI Tool Is Not an AI Strategy
Sources
ONS – Management practices and technology adoption in UK firms
DSIT – AI Adoption Research (2026)
McKinsey – The State of AI: Global Survey 2025
NCSC – Guidance for using AI systems securely