
Marketing Strategy

Burn the Playbook: What Life Science Leaders Get Wrong About AI

By Matt Wilkinson

Your team has ChatGPT licences. Your competitors are talking about AI agents. Your board wants an AI strategy.

 

And you're still trying to figure out whether this is a real shift or just expensive theatre.

I sat down with Marc Crosby on his Burn the Playbook podcast to work through this exact tension. The conversation kept circling back to a single uncomfortable truth: the gap between AI enthusiasm and AI execution is widening, not closing. And that gap is where competitive advantage lives.

 

The shift from toy to tool

When ChatGPT launched three years ago, most of us treated it like a clever spell checker. Something to clean up emails. Maybe draft a social post.

 

Then the reasoning models arrived.

 

That was the inflection point. Not because the technology suddenly worked, but because we could finally see the scaffolding underneath. We could understand how to build repeatable workflows instead of just asking questions and hoping for useful answers.

 

The experimental phase served its purpose. Now we need execution.

 


What's actually blocking adoption

I've spent the past year working with life science commercial teams on AI readiness. The research we conducted at ELRIG Drug Discovery 2025 revealed something striking: 68% of exhibitors were optimistic about AI, but only 7% could be classified as power users.

 

That's not an awareness problem. That's an execution gap.

 

Three blockers keep showing up:

 

Data readiness. Salesforce has been around for 26 years. Many organisations still struggle to get their CRM used properly. If it took a quarter century to embed a relatively simple system, AI represents a far steeper climb.

 

AI literacy. Giving people licences to Copilot doesn't mean they'll use it. Banning it doesn't mean they won't. The rise of "secret cyborgs" - people using free tools with no guardrails because their organisation hasn't provided approved alternatives - is a genuine risk.

 

Incentive misalignment. When you've got a revenue target to hit this quarter, AI training feels like a luxury. Unless there's immediate pressure to adopt and clear early wins, people default to what they know.

 

The organisations that break through aren't the ones with the biggest budgets. They're the ones that make AI easy to use and immediately valuable.

 


PersonaAI: from fiction to function

Traditional persona development follows a familiar pattern. Get the team in a room. Review a few LinkedIn profiles. Build consensus around who you're targeting. Create a two-page document that gets filed away.

 

That process still has value. The alignment matters. But what comes next is usually fiction built on thin data.

 

We can do better now.

 

With deep research capabilities, we can analyse 5, 10, 20 LinkedIn profiles instead of one or two. We can supplement that with voice-of-customer interviews. We can keep all that context instead of condensing it down to something "usable" by human standards.

 

Then we turn those personas into synthetic customers. Not static PDFs, but interactive agents you can actually use.

 

Test messaging. Model buying group dynamics. Practice sales scenarios. The persona becomes a tool, not a document.
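If you want to see how simple the mechanics can be, here's a rough sketch of a synthetic customer: the persona research dropped into a system prompt so you can put a piece of messaging in front of it and see how it reacts. The specifics here (the OpenAI Python SDK, a gpt-4o model, a hypothetical persona file) are illustrative assumptions rather than a recommendation; any capable model and research format would work.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Persona research distilled from LinkedIn profiles and voice-of-customer
# interviews (hypothetical file name).
with open("persona_lab_director.md") as f:
    persona_brief = f.read()

SYSTEM_PROMPT = (
    "You are role-playing a prospective customer. Stay in character and "
    "react as this person would, objections included.\n\n"
    f"Persona research:\n{persona_brief}"
)

def ask_persona(message: str) -> str:
    """Put a piece of messaging in front of the synthetic customer."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

print(ask_persona("Our new assay platform halves your screening time. Does that land for you?"))

The same pattern extends to buying groups (one agent per stakeholder) and to sales role-play, which is exactly the shift from persona as document to persona as tool.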

 

I've been using tools like Yoodli for role-play preparation. You create the scenario, practice the pitch, get feedback on whether you're hitting your key questions. It's humbling how much difference two or three practice runs make.

 

The shift from persona as artifact to persona as agent is subtle but significant. One sits in a drawer. The other shapes daily decisions.

 


The tools that matter

Everyone asks about my favourite AI tool. The honest answer: it depends on the job.

 

For deep thinking and complex tasks, Claude edges ahead. For research and synthesis, I still love ChatGPT. For personality analysis and sales preparation, Humantic AI is exceptional. For learning and absorption, Notebook LM is unmatched.

 

Notebook LM deserves special attention. Most people think of it as "the podcast tool". That undersells it dramatically. The real value is turning dense information into multiple learning formats. Podcasts, yes. But also quizzes, videos, different explanatory approaches. When you're trying to absorb complex material, having multiple angles matters.

 

And this points to something larger: we're drowning in information but starving for absorption. AI can generate infinite content. The bottleneck is human understanding. Tools that help us learn, not just access information, will win.

 

Moments over funnels

The traditional marketing funnel was never really about how customers buy. It was about how marketers think about advertising copy.

 

Now, with AI mediating more purchase decisions, the funnel is even less relevant. Customers might never visit your site. They might complete a purchase through a chatbot. They might form their entire opinion of your product based on what Claude or ChatGPT tells them.

 

This isn't hypothetical. Walmart is using chatbots to negotiate with suppliers. The question isn't whether AI will reshape commercial interactions. It's whether you'll be present when those interactions happen.

 

That requires a shift from funnel thinking to moment thinking. Where are the conversations happening? What triggers a search? What information needs to be available when someone asks an AI for a recommendation?

 

You can't optimise the funnel if you're not in it.

 

The human stays in the loop

For all my enthusiasm about AI capabilities, I'm adamant about one thing: humans must remain in the creative loop.

 

Think of it this way. In the old world, if I wanted to compose a piece of music, I'd need to learn every instrument, conduct the orchestra, engineer the sound. Now AI can play all the instruments. But I still need to be the composer. And I still need to be the conductor. And I definitely need to be the sound engineer.

 

The emotional sense-checking, the strategic framing, the narrative arc - these remain distinctly human responsibilities. Without them, you get AI slop. Generic content that speaks to no one.

 

The organisations that win with AI won't be the ones that automate everything. They'll be the ones that augment the right things while keeping human judgment where it matters most.

 

What to burn, what to build

Marc asked me what habit sales and marketing teams should burn in 2026. My answer: short-term thinking.

 

I know there are pressures to hit quarterly targets. I know performance marketing has trained us to optimise for immediate ROI. But this quarter-by-quarter obsession is killing long-term brand building.

 

If we keep optimising for the next three months, we'll disappear in the long term.

 

The companies that matter in five years aren't the ones squeezing out an extra percentage point of conversion today. They're the ones building brands, creating experiences, and establishing authority that compounds over time.

 

AI makes this tension worse, not better. It's tempting to use AI to crank out more content, more campaigns, more everything. But more of the same isn't strategy.

 

The question isn't whether AI can help you do more. It's whether you're doing the right things in the first place.

 

Your move

Most life science organisations know AI matters. Few have figured out how to deploy it effectively. The gap between optimism and execution represents both risk and opportunity.

 

You don't need perfection. You need direction. You don't need to solve every AI challenge at once. You need to identify where AI creates immediate value and build from there.

 

The technology exists. The tools work. The only question is whether you're ready to move from experiment to execution.

 

Because your competitors already are.