The AI Execution Gap



ELRIG Drug Discovery 2025: 68% AI Optimistic, Only 7% Winning

By Matt Wilkinson

New survey data collected by Strivenn at ELRIG's Drug Discovery 2025: A Festival of Life Science reveals the execution gap holding back commercial teams.

 

Your team has access to ChatGPT or Copilot. Maybe someone even attended an AI webinar.


And still, 69% of your people don't use AI daily.


That's the paradox revealed in our ELRIG Drug Discovery 2025 exhibitor survey of 107 life science companies. The gap between AI optimism and AI execution is vast, measurable, and costly. And it's not because your team lacks ambition or budget.


It's because access is not adoption.


The Real Numbers Behind the AI Gap

Here's what 107 exhibitors - spanning tools, reagents, automation, CRO/CDMO, and SaaS companies - told us:


68% are cautiously optimistic about AI. Not fearful. Not hyped. Discerning. These are scientists trained to test before trusting, evaluating AI the same way they'd evaluate a new assay: does it improve accuracy, speed, or reproducibility?


But only 7% are power users - people who've integrated AI extensively into their workflows.


The majority - 44% - are light users. They've tried AI. They've overcome the initial scepticism. They've written a prompt or two. But they haven't seen enough value yet to make it a daily habit.


These light users are like gym members who bought the trainers but never show up.


They don't need more information. They need a win.


Why Having Tools Doesn't Mean You're Using Them

Here's the brutal reality: 37% of organisations have AI tools but no structured programme. They've handed out licences and called it AI transformation.


The result? 69% of individuals in those organisations don't use AI daily despite tool availability.


Contrast that with companies running structured pilots or operational AI programmes: 53-73% daily usage.


Having access doesn't make a company "AI mature" any more than owning Excel makes you a data analyst. AI maturity depends less on tool count and more on repeatable systems - documented prompts, best-practice libraries, and feedback loops that turn experimentation into execution.


Real readiness shows when outputs start feeding into performance metrics, not PowerPoint slides.


The Confidence Gap: Experience Breeds Excitement

Power users are twice as excited about AI as regular users (57% vs 28%).


Non-users? Zero per cent excited.


Sentiment follows usage, not the reverse.


This isn't a knowledge problem. It's a confidence problem. And confidence comes from pattern recognition - once you've seen AI actually deliver on small things, it stops feeling like magic and starts feeling like muscle memory.


Power users aren't braver. They've simply built reference points for success.


Your job isn't to evangelise. It's to create those reference points faster.


What Light Users Actually Need

Light users represent your largest conversion opportunity. They've crossed the scepticism threshold. They just haven't experienced transformative value yet.


What they need isn't another webinar about AI's potential. They need something that takes a tedious 30-minute task and makes it a 3-minute one.


75% of current AI use is text generation - emails, LinkedIn posts, slide decks. It's the comfort zone because it's tangible, visual, and feels safe. Everyone writes emails or slides.


But the real maturity test is whether teams evolve from text to thinking: scenario modelling, data interpretation, creative strategy. Think of it like learning microscopy - first you focus, then you interpret. AI literacy is about moving from output to insight.


Companies that stay stuck at "content" will see diminishing returns because they're solving for productivity, not originality.


The SaaS Playbook: What Lab Companies Can Copy

SaaS companies lead on AI maturity by a wide margin, scoring 3.56 against 2.96 for reagent companies.


They didn't get there by having bigger budgets or fancier tools. They got there by treating AI as infrastructure, not novelty. It's baked into daily workflows, not bolted on.


The SaaS operating rhythm:

  • Agile pilots with clear metrics
  • Transparent measurement
  • Cultural permission to experiment
  • Test, measure, iterate relentlessly

In contrast, many life science firms are still running AI like a side project - something that happens when marketing has spare time, or when the CEO mentions it in an all-hands.


The takeaway isn't "become software companies." It's "adopt their operating rhythm." SaaS leaders didn't wait for certainty. They created momentum through use-case velocity.


The Bottom-Up Revolution (With or Without You)

Here's the fascinating part: even in organisations classified as "not using AI," 38% of individuals experiment personally.


They're building shadow-AI workflows. ChatGPT drafts. Prompt templates. Internal cheat sheets. Innovation bubbling from the bottom up.


They're operating as Secret Cyborgs - a massive data security risk.


Employees are quietly finding faster ways to get things done whilst leadership debates governance frameworks.


Instead of fearing this, formalise it. Capture what's working and scale it. Identify your internal AI champions - the regular and power users who are already getting results - and empower them to teach others.


Innovation is rarely born in a meeting room. It starts with a frustrated person finding a faster way to get something done.


What's Actually Blocking You (It's Not What You Think)

When we asked what prevents wider AI adoption, budget ranked last.


The real blockers:

  • Data quality (44%)
  • Governance and compliance concerns (36%)
  • Skills gaps (24%)

This hierarchy exposes the truth: what's missing is execution capability, not justification.


Commercial teams believe in AI value. They just lack clean data, clear guardrails, and competent operators.


Data is the oxygen of AI. If it's polluted, everything downstream suffocates. Governance and data quality aren't glamorous, but they're the bedrock of reliable models and automation.


This is a cultural issue as much as a technical one. Does the team value documentation, data hygiene, and process discipline? Without structure, AI becomes like a lab full of unlabelled samples: full of potential, but unusable.


From Caution to Action: Closing the Execution Gap

Caution isn't resistance. It's discernment.


The danger for vendors - and for commercial teams - is overpromising "transformation" when the audience wants proof of incremental benefit. Messaging that focuses on specific use cases ("AI that cuts poster prep time in half") will always outperform abstract futures talk.


Here's your action plan:


For light users: Pick one tedious task. Document it. Find the AI solution that cuts it in half. Measure the time saved. Share the result.


For teams without structure: Stop calling it an "AI programme." Start with use-case libraries. What prompts work? What workflows save time? Document and share them.
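If it helps to picture what "document and share them" looks like in practice, here is a minimal sketch of a use-case library entry, assuming a team that prefers a small script to a spreadsheet. Every field name and figure below is illustrative, not drawn from the survey; the point is simply that each documented win captures the prompt, the workflow it replaces, and the time saved.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class UseCase:
    """One documented AI win: the prompt, the workflow it replaces, and the payoff."""
    name: str             # short label, e.g. "Conference follow-up emails"
    owner: str            # the colleague who found and proved the workflow
    prompt: str           # the exact prompt that worked, so others can reuse it
    replaces: str         # the manual task this shortcut stands in for
    minutes_before: int   # typical time the task took by hand
    minutes_after: int    # typical time with the AI-assisted workflow
    added: date = field(default_factory=date.today)

    @property
    def minutes_saved(self) -> int:
        return self.minutes_before - self.minutes_after

# Hypothetical example entry - the figures are placeholders, not survey results.
library = [
    UseCase(
        name="Conference follow-up emails",
        owner="A. Colleague",
        prompt="Draft a follow-up email to a scientist who visited our booth and asked about...",
        replaces="Writing each follow-up from scratch",
        minutes_before=30,
        minutes_after=5,
    ),
]

for uc in library:
    print(f"{uc.name}: {uc.minutes_saved} minutes saved per run (owner: {uc.owner})")
```

However you store it, the entry that matters is the one a colleague can copy and rerun tomorrow without asking for help.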


For leaders: Identify your power users. Give them time to teach. Turn their experiments into team capabilities. Measure adoption the way you measure any other commercial metric - ruthlessly and regularly.


For everyone: Remember that confidence compounds with small, repeatable successes. One solved problem builds the courage to tackle harder ones. Peer learning accelerates that growth because credibility transfers better through colleagues than through consultants.


The Execution Mantra

The world doesn't need another AI evangelist. It needs more operators who know how to ship results.


Adoption happens when AI outputs connect directly to KPIs - time saved, leads qualified, reports delivered. "Execution over inspiration" should be the mantra: build processes that scale, not one-off hero experiments.
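To make that concrete, here is one way a team could roll a simple usage log up into the kind of numbers a commercial review already tracks - minutes saved and the share of the team using AI in a given week. It builds on the hypothetical library sketch above; the log entries, names, team size, and figures are placeholder assumptions, not survey data.

```python
from collections import Counter

# Hypothetical usage log: one (user, use_case, minutes_saved) tuple per completed run.
runs = [
    ("alice", "Conference follow-up emails", 25),
    ("alice", "Competitor battlecard refresh", 40),
    ("bob",   "Conference follow-up emails", 25),
]

team_size = 10  # illustrative commercial team headcount

total_minutes_saved = sum(saved for _, _, saved in runs)
weekly_active_users = {user for user, _, _ in runs}
adoption_rate = len(weekly_active_users) / team_size
runs_by_use_case = Counter(name for _, name, _ in runs)

print(f"Minutes saved this week: {total_minutes_saved}")
print(f"Weekly adoption: {adoption_rate:.0%} of the team ran at least one AI workflow")
print(f"Most-used workflow: {runs_by_use_case.most_common(1)[0][0]}")
```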


AI maturity could become the new differentiator between thriving and lagging commercial teams. But maturity isn't measured by ambition. It's measured by repeatability.


Your competitors have the same tools you do. They're facing the same data quality issues, the same governance questions, the same skills gaps.


The difference between winning and waiting is what you do in the next 90 days.


Light users need wins, not webinars. Teams need structure, not more software. Leaders need to capture what's already working in the shadows and scale it into the light.


The AI execution gap isn't about technology. It's about turning cautious optimism into confident action.


Download the full report to see where your segment sits, identify competitive blind spots, and build your 2026 commercial strategy on evidence, not assumptions.

 


 

 

Q: We're a small team. How do we start building AI capability without a massive programme?

A: Start with your light users - the 44% who've already experimented. Call a 30-minute working session with three people who've tried ChatGPT. Ask them to demonstrate one thing that saved them time. Document the prompt, the workflow, and the time saved. Share it in Slack or Teams. Next week, pick a different task. Within a month, you'll have a use-case library built by your team, not consultants. Budget required: zero. Cultural shift: massive.

Q: How do I know which AI use cases will actually matter to our customers?

A: Test your messaging and priorities against your actual buyer perspective. Tools like PersonaAI let you pressure-test positioning, value propositions, and use cases through the lens of your target personas before you invest resources. Ask your synthetic customer: "Would this AI capability matter in your buying decision?" If the answer is tepid, refine before you build. It's faster and cheaper than discovering misalignment after launch.

Q: Our sales team is sceptical that AI will help them. How do I get buy-in?

A: Don't sell them on AI's potential. Show them one win. Pick your best rep - someone respected, pragmatic. Work with them to use AI on one specific task they hate: competitive battlecard updates, prospect research, or follow-up email personalisation. Measure the time saved. Have them demo it at the next sales meeting. Peer credibility converts sceptics faster than any vendor presentation. Start with influence, not mandates.