A new study conducted by IBM just confirmed what we've been saying for months: 66% of enterprises report significant AI productivity improvements. That sounds impressive until you see the breakdown.
72% of large enterprises are seeing gains. Only 55% of SMEs report the same.
That's a 17-point gap. And it's widening.
Here's what's fascinating about the IBM findings: 41% of respondents anticipate returns on AI investments in under a year. Fast ROI. Clear value. Yet smaller organisations are falling behind.
Why?
It's not about money. It's about readiness.
Our own research reveals the same pattern. We recently surveyed life science marketing teams at ELRIG (the full report will be out soon) – only 9% cited budget as the problem. The real obstacles? Data quality, governance, and skills. The execution gap, not the funding gap.
Large enterprises aren't winning because they have bigger AI budgets. They're winning because they've invested in the infrastructure that makes AI work: governance frameworks, training programmes, and clear accountability.
Buried in the IBM report is the solution most organisations are missing. IBM explicitly advises: "Establish a cross company 'AI Board' to mitigate risk: The AI Board's role is to define ethical principles and risk appetite, and review higher risk AI use cases before they are implemented. This, combined with increased AI literacy, will give business units a high level of autonomy to implement AI use cases with confidence."
Read that again: governance plus literacy equals autonomy.
This isn't theoretical. It's operational. IBM found around a third of respondents are already using AI to accelerate innovation timelines (36%), shift to continuous AI-driven decision-making (32%), and redesign value streams around AI (32%).
Those organisations aren't just automating tasks. They're fundamentally changing how work happens. But only after building the governance and literacy foundation.
We've seen this play out firsthand in life science companies. A global firm issued a full ban on generative AI until internal policies could be finalised. Sensible, right?
Except marketers across regions were quietly using ChatGPT through personal accounts. No approvals. No audit trail. No oversight.
We call this the "Secret Cyborg" phenomenon. People using AI in secret, increasing both productivity and risk. It's more common than leadership realises.
The 2025 EU Joint Research Centre report found that only 17% of SMEs had implemented any form of AI training, compared to over 60% of large companies. That's a staggering gap in an era when the EU AI Act requires organisations to ensure their people have "a sufficient level of AI literacy."
Large enterprises are building training programmes. Smaller teams are hoping for the best.
Some worry that governance will slow innovation. The opposite is true.
ISO 42001, the world's first international standard for AI management systems, doesn't restrict AI use. It creates the guardrails that make safe experimentation possible.
Think of it this way: compliance teams don't slow drug development by insisting on clinical trials. They create the framework that makes approval possible. AI governance works the same way.
When your team understands what AI can and can't do, which use cases require review, and how to explain model decisions to stakeholders, they move faster, not slower. That's what IBM means by "autonomy with confidence."
24% of IBM's respondents credit AI with fundamentally changing their business models. Not incrementally improving them. Fundamentally changing them.
Your competitors in that 72% aren't just getting productivity gains. They're redesigning workflows, accelerating decision-making, and building competitive advantages that compound daily.
Every quarter you wait is a quarter your competition spends building institutional AI capability. Training teams. Establishing governance. Capturing productivity gains.
The SME productivity gap isn't closing by accident. It's widening by design – specifically, by the design choices large enterprises made months ago to invest in readiness.
AI literacy isn't abstract. It's concrete and role-specific:
Marketing teams need to understand prompt engineering, output verification, and compliance requirements for regulated content. Customer success teams need to recognise when AI-generated responses require human review. Leadership needs to know which AI use cases require board-level approval.
Culture-first governance means building this capability systematically. Auditing your AI stack. Mapping impact by role. Defining literacy expectations. Tracking progress. Making it continuous.
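To make "define expectations, track progress" concrete, here is a minimal sketch of how a team might record role-specific literacy expectations and spot the gaps. It is purely illustrative: the roles, topics, and completion data are hypothetical examples drawn loosely from the roles above, not part of the IBM report or our framework, and most teams would hold this in a training register rather than code.

```python
# Illustrative sketch: role-specific AI literacy expectations and a simple gap check.
# Role names, topics, and completion data are hypothetical, not a prescribed framework.

EXPECTATIONS = {
    "marketing": {"prompt engineering", "output verification", "regulated-content compliance"},
    "customer_success": {"recognising when AI output needs human review"},
    "leadership": {"board-level approval criteria for AI use cases"},
}

# Training completed so far, per role (hypothetical data).
COMPLETED = {
    "marketing": {"prompt engineering"},
    "customer_success": set(),
    "leadership": {"board-level approval criteria for AI use cases"},
}

def literacy_gaps(expectations, completed):
    """Return the outstanding literacy topics for each role."""
    return {
        role: sorted(topics - completed.get(role, set()))
        for role, topics in expectations.items()
    }

if __name__ == "__main__":
    for role, gaps in literacy_gaps(EXPECTATIONS, COMPLETED).items():
        status = "on track" if not gaps else "gaps: " + ", ".join(gaps)
        print(f"{role}: {status}")
```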
The six-step framework we've developed for building AI literacy gives you the roadmap. But frameworks don't build capability. Training does.
IBM's data makes one thing clear: the productivity gap between AI-ready organisations and everyone else is real, measurable, and growing.
The question isn't whether to invest in AI literacy and governance training. The question is how much longer you can afford not to.
Because whilst you're deciding, your competition is already training their teams, establishing their AI Boards, and capturing the productivity gains that compound into strategic advantage.
The 17-point gap won't close itself.
We've trained life science commercial teams on ISO 42001-aligned AI governance and practical AI literacy. Our approach combines regulatory compliance with role-specific capability building, so your teams move faster with confidence, not fear.
Book a conversation about AI literacy and governance training