S2 Ep3: Bridging the AI Enthusiasm Gap

By Matt Wilkinson

AI makes scaling easy - the winners use it to do better, not just more, by keeping human judgment in the loop.

 

Shownotes

Most life science teams are stuck talking about AI transformation while competitors execute it. The gap between 68% AI optimism and 7% power users isn't about awareness - it's about execution.

This episode is for biotech marketers and product managers navigating AI adoption and product development decisions. Matt and Jasmine explore why organisations retire risk by standardising rather than solving creatively - a pattern that undermines both AI implementation and product differentiation. The key insight: AI makes it dangerously easy to produce more without asking if the strategy underneath is broken. The winners use AI to do better, not just more.

What you will learn:

  • Why "secret cyborgs" using free AI tools with zero guardrails are a symptom of execution failure
  • How to move personas from static PDFs to interactive synthetic customers you can query
  • When to pause AI implementation because you're optimising the wrong work
  • Why Stage Gate committees systematically prune differentiation before launch
  • How product managers can defend customer pain points with veto authority
  • The connection between AI pilot mode and launching commoditised products

Keywords: AI implementation life science, synthetic customer persona, AI execution gap, Stage Gate product development, biotech marketing strategy, AI guardrails, product differentiation, crossing the chasm life science, voice of customer, AI literacy

Watch the full episode, subscribe for weekly insights, and visit strivenn.com for tools to close your own execution gaps.

Transcript

In this episode of A Splice of Life Science Marketing, Matt Wilkinson and Jasmine Gruia-Gray tackle two connected pressure points bleeding advantage from life science organisations: the widening gap between AI enthusiasm and actual deployment, and Stage Gate committees that systematically prune differentiation during development. Both problems share the same root - retiring risk by standardising rather than solving creatively.

The AI Execution Gap

Speaker: Matt Wilkinson

Welcome to A Splice of Life Science Marketing, your go-to show for life science marketing professionals in biotech, med tech and diagnostics. Join us for sharp, strategic conversations that turn cutting-edge insights into real-world marketing advantage. I'm Matt Wilkinson.

Speaker: Jasmine Gruia-Gray

And I'm Jasmine Gruia-Gray. In each episode, we'll cut through the hype and complexity with practical plays you can use to earn trust, stand out in crowded categories and convert attention into momentum.

Speaker: Matt Wilkinson

Hi Jasmine, and hello everybody. Today, we're examining two pressure points where most organisations are bleeding advantage. First, the widening gap between AI enthusiasm and actual deployment, and then the Stage Gate committees that systematically prune differentiation during development, long before launch. Teams inherit a commodity product and wonder why positioning feels impossible. These aren't separate problems. They're symptoms of the same organisational pattern - retiring risk by standardising rather than solving creatively, whether it's defaulting to generic AI experiments or cutting awkward product features that address high-value pain points. The mechanism is identical. Let's start today with where most companies are stuck right now - talking about AI transformation while their competitors execute it.

Speaker: Jasmine Gruia-Gray

What I really thought was important about your blog post, Matt, was that you call out the secret cyborgs - people in your organisation using free AI tools with zero guardrails because you haven't given them approved alternatives. That's not a future risk, that's happening right now in every life science company that banned AI tools without providing substitutes. And the gap between the 68% who are optimistic about AI and the 7% who are actual power users - that's not awareness, that's execution failure. But what stopped me cold was this insight: we're drowning in information but starving for absorption. AI can generate infinite content. The bottleneck is human understanding. That reframes the entire AI conversation from what we can automate to what actually helps us learn and decide. So your article isn't about AI capabilities. It's about why the gap between enthusiasm and execution keeps widening and what actually closes it. So I wanted to follow up with some questions. You identified three blockers: data readiness, AI literacy and incentive misalignment. But the data point that stands out is organisations taking two to six years to properly embed Salesforce. That's wild. If that's the baseline for a relatively simple system, what's different about the organisations that are breaking through with AI?

What Winning Organisations Do Differently

Speaker: Matt Wilkinson

It's really interesting. The companies that I've worked with that are breaking through with AI are the ones that are using the tools at scale. They're creating spaces where people can really play with the tools, and they're focusing not just on AI literacy but on an AI culture - one where people collaborate amongst themselves and with the AI. But they're ones that don't treat AI as a magic bullet. They look at it going, okay, these are the tasks we can use AI to help us solve, and these are the places where we should absolutely have human oversight - where we're working alongside the AI tools rather than just handing everything over to them. And that's the big cultural challenge as well. One of the big concerns that you and I heard from interviewing so many folks at ELRIG Drug Discovery 2025 was that the more people talk about AI, the more there's a group of people who are scared that AI is going to take their jobs. I think it's going to take tasks rather than jobs, at least in the first instance. Of course, the more tasks AI takes, maybe there is a need for fewer jobs, but we need to look at those tasks - where can AI help elevate what we do? And I think that's one of the things the organisations that are winning are doing. They're the ones making AI easy to use and immediately available, and culturally they're adopting it to do better, not just more.

Speaker: Jasmine Gruia-Gray

So I think what you're saying is it's a mindshift in not thinking about AI as your helpful copy paste tool, but thinking about it around how it can help you go deeper into a subject, how you can experiment with different questions and different ways of cutting data much more quickly than you can do on your own.

Speaker: Matt Wilkinson

Yeah, absolutely. And it's about looking at how can I really get to grips with more things, but also that there are tasks that AI really should now be helping us with, and we should be looking at how, as a creator, you know, whatever role that you're working in, how can I focus on creating a better end product that's more aligned with who I'm trying to communicate with, rather than just creating more, which I think is the great fallacy of where a lot of AI initially was going.

Persona as Agent, Not Artefact

Speaker: Jasmine Gruia-Gray

So you also talk about Persona AI, an agent that you created, as moving from persona as artefact to persona as agent. Can you walk us through what that actually looks like when someone's testing messaging or practising a sales pitch? What changes in the work?

Speaker: Matt Wilkinson

So one of the things that I know you and I have discussed in the past is that great, disheartening moment when you've been out and created these personas and they're just left to rust on a hard drive somewhere. They're probably not even printed out - they might just be a pretty PDF that people look at during a project and never really use again. What this does, in the first instance, is take all of that information you've gathered around your persona. One of the things it allows us to do is create an average across even more members of our persona group - real human beings. It can be quite difficult to create a persona based on more than a few examples of one customer, but if we know 10 or 20 LinkedIn profiles of who our persona really is, we can use deep research tools to create an aggregate of all of those into one average persona, and that gives us a lot of deep context. Then we can layer on interviews, feedback, all the sorts of information we'd need to turn that persona into something really detailed - something that holds far more information than a human ever could. One of the challenges of using personas is that you're trying to move from standing in your own shoes to standing in your persona's shoes, and then writing content or creating something that works for the persona. Maybe you're doing new product development or creating a new website - whatever it is, you're trying to do something that really resonates with that persona. Now, by putting all of that into what is essentially a retrieval-augmented generation (RAG) system, and using it as the context from which you create a synthetic customer, you can interact with that synthetic customer in a way that lets them embody all of the research you've done.
So in the first instance, you could use that to help you create content to test messaging, very much in a chatbot style, but you can also do it in some really fun ways. So there are now tools like Yoodli, where you can create personas based on that research and personality insights, and then have a conversation with those synthetic customers. Well, I'd set those up for, shall we say, a sales role play, or maybe a customer visit. So it doesn't really matter what the scenario is - anytime that you wanted to be able to interact with your persona, there are ways to do it. At the moment, there aren't tools that allow you to sort of encompass every single scenario straight off the bat. You have to sort of pick and choose a little bit, but there are absolutely ways to really, really be able to get under the skin of your persona and really be able to use them in ways that just a few years ago were absolutely impossible.
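As a rough illustration of the retrieval-augmented, synthetic-customer idea Matt describes - not any specific product, and with the persona fields, research notes, and relevance heuristic all invented for the sketch - the core loop can be as simple as ranking persona research against a question and assembling the best matches into an in-character prompt:

```python
# Illustrative sketch: rank persona research notes against a question,
# then assemble the most relevant ones into a synthetic-customer prompt.
# Field names, notes, and the scoring heuristic are all assumptions.

def score(note: str, question: str) -> int:
    """Crude relevance score: count of words shared with the question."""
    q_words = set(question.lower().split())
    return len(q_words & set(note.lower().split()))

def build_context(notes: list[str], question: str, top_k: int = 2) -> str:
    """Pick the top_k most relevant research notes for this question."""
    ranked = sorted(notes, key=lambda n: score(n, question), reverse=True)
    return "\n".join(ranked[:top_k])

def synthetic_customer_prompt(persona: dict, notes: list[str], question: str) -> str:
    """Assemble a prompt asking the model to answer in character."""
    context = build_context(notes, question)
    return (
        f"You are {persona['name']}, a {persona['role']}.\n"
        f"Personality: {persona['personality']}\n"
        f"Relevant research about you:\n{context}\n\n"
        f"Answer the marketer's question in character:\n{question}"
    )

# Hypothetical persona aggregated from interviews and LinkedIn profiles
persona = {
    "name": "Dr Smith",
    "role": "QC lab manager",
    "personality": "detail-oriented, risk-averse",
}
notes = [
    "Interview: frustrated by manual data transcription between instruments.",
    "Survey: values audit trails above raw throughput.",
    "LinkedIn: 12 years in regulated QC environments.",
]
prompt = synthetic_customer_prompt(persona, notes, "How important are audit trails to you?")
print(prompt)
```

In a real setup the prompt would be sent to an LLM and the crude word-overlap scoring replaced by embedding search, but the shape is the same: research in, context ranked and retrieved, persona answered in character.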

Speaker: Jasmine Gruia-Gray

Again, it comes down to how AI can help augment your thinking. In this particular case, with a synthetic customer, brainstorming with that synthetic customer - would you rather see the marketing message this way or that way? Which one resonates more with you? Which pain points are more acute with you - this one, that one or the third one? So it's a fabulous way to move away from copying and pasting and just taking what AI says de facto and instead augment your thinking and experiment with different outputs.

Speaker: Matt Wilkinson

Yeah, absolutely. And you can ask those personas to respond with the personality that you believe that persona would have. So along with the wants and needs and how they get measured in their job, you can also say, hey, well, this person, they're working in quality control, maybe they're a little bit more detail oriented than the creative marketer. And so you're really able to then get people engaged with that chatbot and get the responses back really aligned to that personality type that you input into that persona. And that allows you to help everything from R and D all the way through to sales conversations, and of course, with a big bunch of help for marketers in the middle as well.

When to Pause AI Implementation

Speaker: Jasmine Gruia-Gray

So in these last couple of minutes, we've been talking about ideas on how to implement AI, and of course, the guardrails that are important. Let's look at the flip side. When should a sales or marketing leader actually pause AI implementation because they're optimising the wrong work?

Speaker: Matt Wilkinson

Well, the short answer is when they're using AI to scale work that shouldn't exist in the first place. AI makes it dangerously easy to produce more - whether that's more content, more campaigns, more of whatever you want - without ever asking if the strategy underneath is broken. And one of the things that's really important is that with AI, of course, we can scale out to multiple people with multiple personas. We can be everything to anybody. We can personalise at such scale that we can appear as any of our potential customers might want us to be. But if we do that, and we personalise in the wrong way, and we sort of become this amorphous blob, we no longer stand for anything. And the whole point of marketing is to stand for something and to know who you're for, and perhaps more importantly, who you're not for. So it's really important, in my mind, to really make sure that you're not just doing more of the same, but actually you're using AI to help you test, measure, and then be able to do better at what makes a difference.

Speaker: Jasmine Gruia-Gray

I think it almost comes down to the saying "just because you can doesn't mean you should," and really understanding what problems you're solving by using AI is super important. So where have you seen teams get this wrong? Where do you think people cede judgment to AI where they shouldn't?

Speaker: Matt Wilkinson

I think it's really around not making sure that they've got humans in the loop. There are some hilarious - I mean, I say hilarious - some really worrying examples that make the news, where people put things out that include AI hallucinations. Some pretty big decisions were made by the UK police recently based on AI hallucinations. So there are examples where you've got to be really careful. The more sensible approach is making sure that you're not quoting false case law if you're going into court, that you're not quoting hallucinated references - really making sure that you're fact-checking everything, but also that you're not ceding emotional judgment and taste. We really have to make sure that we know what's important, and that we're keeping human judgment in the loop at every stage of what we're doing. What we really want to be doing is communicating with the customer. We have to make sure that the content, and the way we show up to the customer, is true, because still, at this point, the customer is making the final decision. So we have to keep the focus on the human, knowing that AI is now doing a lot of the work at some points along the buying journey.

Speaker: Jasmine Gruia-Gray

What I think you're also saying is AI may be very good at the logic side of how we think, but not at all on the emotional side or on the ego side. And if for no other reasons - and those are very important reasons - the human should be staying in the loop to inject emotion and to make sure that the outputs are not soulless, as I've many a time told my LLMs.

Speaker: Matt Wilkinson

Yeah, we absolutely need to make sure that those outputs aren't soulless, and I think that's one of the things that breaks me about that execution gap, and why closing it matters more now than ever: so many more people are using AI now, so there is so much more AI slop out there. Most teams discover their AI strategy collides with deeper problems - whether that's the data, whether that's culture, whatever it is - there are a lot of problems people discover when they're going through AI implementations. Organisations are like icebergs, and trying to turn them around is incredibly difficult. We really have to play that game of: yes, okay, solo operators and small teams can scale quickly with AI and do incredible things with it, but big organisations are still full of people. Getting everybody in an organisation pointing the same direction and learning at similar levels - that's a big ask. So I think that over the next few years, one of the big challenges will be whether learning and development teams can make sure not only that their current teams are trained up to use these tools in a rapidly changing landscape, but also that they're hiring correctly. And there's a whole thing about training students that I'm sure we don't have time to get into today.

Stage Gate Committees and Differentiation

Speaker: Jasmine Gruia-Gray

Fantastic. Well, thank you for this, Matt.

Speaker: Matt Wilkinson

Now, moving on to your article this week. You talked about Stage Gate committees pruning products the way amateur gardeners prune trees - cutting the branches that look risky rather than solving for the harvest. And I love this, because, trying to look after orchids and some bonsai, it's always been difficult to know when to prune and how to look after things correctly. You used a Pomona metaphor: the skilled orchardist removes healthy-looking branches and saves the awkward ones, because they understand which branches will actually produce fruit. Unfortunately, Stage Gate committees are often doing the opposite. They're retiring risk by standardising to proven approaches, which systematically destroys differentiation before launch teams ever see the product. What I thought was really brutal about your analysis is the mechanism: each functional group optimises for risk reduction within their own domain. Regulatory wants proven formulations. Manufacturing wants conventional processes. Sales wants comfortable positioning. And nobody's evaluated on whether the cumulative effect turns your differentiated innovation back into a commodity. I wonder whether this is really why so many new products fail, or at least one of the big reasons why so many products feel completely undifferentiated. And when you're going in to try to help organisations launch a product, actually finding a unique position is so tough. So I've got a few questions on this. You argue that differentiation is decided upstream, during feasibility and development, not downstream at launch. But most of us, especially those of us who work in marketing, don't get visibility into those Stage Gate reviews until it's too late. So what's the earliest signal that differentiation is being pruned, what should trigger alarm, and where should those alarm bells be going off?

Speaker: Jasmine Gruia-Gray

I really think that product marketing and product managers need to be the champions of the customers for whom the product is being developed. The alarm bells should be going off when something - some feature, some advantage - is being taken away in the name of retiring risk, and the product manager thinks, "Oh, that person, Dr Smith - I heard Voice of the Customer feedback from him about a pain point he has. That feature would have addressed that pain point. And the business case I developed as a product manager was predicated on that main feature." Those are some of the signals and alarm bells that should be going off in that product manager's mind. And that's the importance of sharing Voice of the Customer across the whole core team - whether they're in R and D, whether they're in regulatory, whatever function they're in - so that others on the team can also hear those alarm bells as they're thinking about retiring risk.

Speaker: Matt Wilkinson

That's really interesting. So, when you're looking at retiring risks, you identified three branches that you strongly recommend shouldn't be pruned: severe customer pain points, economic differentiation that changes unit economics, and workflow integration that reduces adoption friction. How should a product manager actually quantify these before going into the feasibility gate process, so they have some negotiating leverage to make sure those parts of the product definition don't get pruned?

Speaker: Jasmine Gruia-Gray

I think it's again important to connect the dots to the Voice of the Customer - and not in a generic, "let's make it easier" way, but to be specific. "Let's eliminate these three manual steps that cause 15% of early-stage researchers to abandon a protocol." Yes, you're making it easier, but you're being very concrete and very factual about what that ease is going to mean for the customer, and how it connects to the pain points they articulated to you, as well as to their workflow. So it's a combination of factors - being specific. And really, I don't think committees can as easily argue against ROI calculations and these types of facts within a workflow.
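That "three manual steps cause 15% abandonment" finding can be turned into the kind of ROI number a committee can't wave away. A minimal sketch, where the 15% figure comes from the example above and every other figure is a hypothetical assumption:

```python
# Hedged sketch: turning a Voice-of-the-Customer finding into a revenue
# number for a gate review. All inputs marked "assumed" are hypothetical.

def feature_value(addressable_users: int, abandon_rate: float,
                  abandonment_fixed: float, revenue_per_user: float) -> float:
    """Annual revenue retained if the feature fixes a share of abandonment.

    abandon_rate:      fraction of users who abandon the protocol today
    abandonment_fixed: fraction of those abandonments the feature removes
    """
    retained_users = addressable_users * abandon_rate * abandonment_fixed
    return retained_users * revenue_per_user

value = feature_value(
    addressable_users=2_000,   # early-stage researchers in segment (assumed)
    abandon_rate=0.15,         # from VoC interviews (the example above)
    abandonment_fixed=0.60,    # share the feature plausibly fixes (assumed)
    revenue_per_user=5_000.0,  # annual revenue per retained account (assumed)
)
print(f"Revenue at risk if this feature is pruned: ${value:,.0f}/yr")
# 2,000 x 0.15 x 0.60 = 180 retained users x $5,000 = $900,000/yr
```

The point is not the arithmetic but the framing: de-scoping the feature now has a concrete cost attached to a specific, documented pain point, rather than a vague "it makes things easier".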

Speaker: Matt Wilkinson

Nice. So on a more practical note, you recommend product management should have veto authority over this sort of "prove it" pruning, the same way regulatory or manufacturing can veto on compliance or feasibility. That's a big ask. How do you make that case to leadership without sounding like you're just defending, you know, pet features?

Speaker: Jasmine Gruia-Gray

It comes back to what I had said earlier. Product managers are the guardians of the customer and the future customer. And what I mean by that is they need to be able to articulate what the risks are in terms of the customer's pain points, and in terms of the business case. It's okay to prioritise certain features and build a business case based on those priorities, but if you, as a product manager, don't have the veto authority to keep that business case intact, then you end up at the end of the line when you're getting ready to launch the product where it's just me-too, and then you don't end up realising the benefits of that business case. So it's all a really important circle that the product manager is responsible and accountable for, and should also therefore have the veto authority when these really important, prioritised features are running the risk of being taken out.

Speaker: Matt Wilkinson

Yeah, really, really a lot of sense. One thing that I was really pleased to see in your article was that you touched on Geoffrey Moore's Crossing the Chasm framework, which I'm a big fan of, and you talked about how Stage Gates optimise for mainstream acceptance before establishing the beachhead market. And so does that mean that very often we're optimising for the mainstream and the people that are going to be buying a disruptive innovation once it's accepted, but we're neglecting those early adopters and innovators? Where does this break down? When should committees actually prune aggressively, even if it risks some of those differentiation points?

Speaker: Jasmine Gruia-Gray

When the differentiation solves a pain point that customers aren't actively seeking solutions for - I mean, we keep circling around the same themes - those are the ones that should be pruned away. When the complexity exceeds the value being created - that should be pruned away too. And in part, that's the value of alpha testing: having an early prototype that you can take back to your Voice of the Customer group, asking their opinion, watching them use the prototype, or watching their reaction as you demonstrate it. All of that can help you prune further, or realise that you've hit the right mark and prioritised the right key features. And I think it's important that you understand from the get-go who your target audience is. Is it those early innovators, or is it the middle majority? Is it a line extension that's really focused on your middle majority? That way you can more easily weigh where your features should be prioritised.

Speaker: Matt Wilkinson

Fantastic. Yeah, that makes a lot of sense. And I think that making sure you've got that beachhead, and that you've really stabilised it before moving on, is such an important point - especially in our industry, where you've got to prove that it works and that you're meeting those needs for customers. That's a really strong takeaway. So, love the framework.

Your Move This Week

Speaker: Jasmine Gruia-Gray

Thank you for that. Here's what connects these conversations. Execution gaps compound. The AI implementation gap and the differentiation pruning problem are both symptoms of organisations retiring risk by standardising rather than solving creatively. When you're stuck in AI pilot mode or launching commoditised products, the pattern's the same. Each function optimises within their domain, nobody's accountable for strategic advantage, and by the time you see the problem, the critical decisions have already been made. Here's the move Matt and I are challenging you to make this week: pick one place where your team is optimising for comfort instead of competitive advantage. If it's AI, identify one high-frequency task and solve it rather than strategising about it. If it's product development or product management, attend one gate review and watch which features get de-scoped. The differentiation you lose upstream determines the positioning battles you fight at launch. Thank you so much, Matt, this was a really interesting conversation.

Speaker: Matt Wilkinson

Always is. And thank you, Jasmine, that was great to see you again.

Speaker: Jasmine Gruia-Gray

Thank you all for attending another episode of A Splice of Life Science Marketing, and we look forward to seeing you again.

Speaker: Matt Wilkinson

Soon. Bye bye. Thank you for listening to A Splice of Life Science Marketing. We hope you enjoyed the episode.

Speaker: Jasmine Gruia-Gray

If this conversation helped you, the single biggest way you can support the show is to subscribe and leave a review on YouTube, Spotify or Apple Podcasts. We'd really appreciate it, and it makes a huge difference.

Speaker: Matt Wilkinson

You can find out more about us and the topics we discuss at strivenn.com or on LinkedIn. Thank you so much for listening. We hope to see you next time.

Q&A

How do I identify if my team has "secret cyborgs" using unapproved AI tools?

Run a simple survey asking what tools people use daily - include AI options without judgment. Watch for inconsistent output quality or speed between team members. Check browser extensions during screen shares. The goal is not to punish but to surface hidden productivity so you can provide approved alternatives with proper guardrails.

What is the fastest way to test synthetic customer personas without a big budget?

Start with one persona. Gather five to ten LinkedIn profiles of real customers fitting that type. Use Claude or ChatGPT to synthesise common patterns into a detailed profile. Then create a custom GPT or Claude project with that context and test three marketing messages against it. Total cost: under fifty pounds and a few hours.

How do I know if I am scaling work that should not exist?

Ask: "If we stopped this activity tomorrow, would anyone notice within 30 days?" If not, you are scaling noise. Check whether the work ladders to a measurable outcome tied to pipeline or revenue. Content that fills a calendar but generates no engagement or leads is the first candidate to cut before you automate it.
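The two checks above - "would anyone notice within 30 days?" and "does it ladder to pipeline or revenue?" - can be run as a literal filter over your activity list. A minimal sketch, with the activity records and their flags invented for illustration:

```python
# Sketch of the "scaling noise" audit described above: flag activities that
# nobody would miss within 30 days AND that tie to no pipeline outcome.
# The example activities and their flags are illustrative assumptions.

def is_noise(activity: dict) -> bool:
    """Noise = stopping it goes unnoticed for 30 days and it has no
    measurable link to pipeline or revenue."""
    return not activity["noticed_within_30d"] and not activity["ties_to_pipeline"]

activities = [
    {"name": "weekly blog post",     "noticed_within_30d": False, "ties_to_pipeline": False},
    {"name": "demo-request nurture", "noticed_within_30d": True,  "ties_to_pipeline": True},
    {"name": "conference follow-up", "noticed_within_30d": True,  "ties_to_pipeline": False},
]

# These are the candidates to cut before you automate anything.
cut_before_automating = [a["name"] for a in activities if is_noise(a)]
print(cut_before_automating)
```

Anything the filter flags is a candidate to stop, not scale; run the audit before pointing AI at the work.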

What single question should I ask at my next Stage Gate review?

Ask: "Which customer pain point does this feature removal affect, and how does that change our business case?" This forces the committee to connect de-scoping decisions back to Voice of the Customer evidence. If nobody can answer, the decision is being made on internal comfort, not market reality.

How can a marketer get earlier visibility into product development decisions?

Request observer status at feasibility gate reviews - position it as ensuring launch readiness. Offer to present Voice of the Customer data at each gate, framing it as reducing commercial risk. Once you demonstrate value by catching a differentiation threat early, you earn a permanent seat at the table.
