AI Discoverability

S2 Ep12: Why Life Science Product Managers Need an AI Delegation System

By Matt Wilkinson

Life science marketers have a structural AI citation advantage they're not using -- here's how to capture it.


Shownotes

You reformatted a competitive comparison slide for sales. It took 40 minutes. They looked at it for 90 seconds. That gap -- between feeling productive and being strategic -- is exactly what this episode is about.


Jasmine Gruia-Gray joins Matt Wilkinson to interrogate the DRAG framework (Drafting, Research, Analysis, Grunt work) -- an AI delegation system designed for life science product managers who are drowning in admin they never chose to take on.

Who this is for: Life science product managers, commercial leads, and marketers in biotech and pharma who suspect their calendar is full but their strategic output is thin.


Matt and Jasmine cover the completion bias trap that keeps PMs stuck in low-value work, the honest maths behind the 15-to-20-hour time recovery claim, and the real risk of AI delegation -- not the obvious one you expect, but the subtler cognitive offloading problem that could quietly blunt your anomaly detection over time.


The key idea: The DRAG framework is an AI delegation system, not a time-saving shortcut -- the reclaimed hours only matter if you spend them on customer discovery.


What you will learn:

  • What completion bias is and why it explains most of the PM admin problem -- backed by HBR data on how CEOs and PMs waste structured work time
  • How the AIM protocol (Actor, Input, Mission) solves the blank page problem for AI-assisted drafting in technically specialised life science contexts
  • Why the 15-to-20-hour recovery figure is a ceiling you build toward, not a day-one promise -- and what week one actually looks like
  • The difference between AI synthesis and human interpretation, and why conflating the two is where strategic decay begins
  • How to use DRAG to change stakeholder expectations through performance, not by waiting for organisational redesign
  • The one test that tells you whether your AI delegation has crossed into cognitive surrender: if you spend more time reviewing AI output than talking to customers, you have gone too far

Keywords: life science product management, AI delegation, DRAG framework, completion bias, AI productivity, life science marketing, biotech product manager, AI tools life sciences, strategic time management, cognitive offloading, product management framework, AI workflow


If this conversation challenged how you think about where your attention goes, subscribe and share it with a PM who is still reformatting slides at 5pm. Visit strivenn.com to explore AI-enabled commercial strategy for life science companies.


In this episode, Matt Wilkinson and Jasmine Gruia-Gray debate the DRAG framework -- an AI delegation model for life science product managers. They interrogate the time-saving claims, the cognitive risks of AI synthesis, and the organisational limits of any individual-level fix. What follows is the full transcript of their conversation.

Introduction and the Admin Gap

Speaker: Matt Wilkinson [00:00:01]

Hi Jasmine!

Speaker: Jasmine [00:00:03]

Hello, Matt, how are you doing?

Speaker: Matt Wilkinson [00:00:05]

Good, thank you. And you?

Speaker: Jasmine [00:00:07]

It's a little bit of a gray day here in Virginia, but the cherry blossoms are blooming, so that's fun.

Speaker: Matt Wilkinson [00:00:14]

Nice, I'm just waiting for our cherry blossom to bloom, but it's a very wet and not particularly appealing day here on the south coast of the UK. Yeah, exactly. I'm hoping it's going to be nice by the weekend.

Speaker: Jasmine [00:00:25]

Welcome to spring.

Speaker: Matt Wilkinson [00:00:33]

Let's get started by talking about a blog that you've just written. And I want you to imagine your last Monday morning: the calendar was full, the task list was somewhere north of 50 items, most of them tagged urgent. You opened your laptop, and within the first hour you reformatted a competitive comparison slide that sales had asked for. It took you 40 minutes. They glanced at it for 90 seconds. That's the gap this episode is about -- the gap between feeling productive and actually being strategic.

Today, let's discuss this DRAG framework you wrote about, which stands for Drafting, Research, Analysis, and Grunt work. In it, you argue that it's an AI delegation system life science product managers actually need. I think it's a strong start that maybe doesn't even go far enough. We're going to find out which of us is right.

Speaker: Jasmine [00:01:43]

Do it.

The Individual Fix Versus the Systemic Problem

Speaker: Matt Wilkinson [00:01:44]

Excellent. So you're asking product managers to fix a behaviour problem on their own. But if someone is drowning in admin work, isn't that a management failure, not a product management behaviour failure? Why does the individual carry the entire fix?

Speaker: Jasmine [00:02:06]

Well, it's because waiting for your organisation to redesign itself is not a strategy. I've watched so many product managers in life science companies sit in those FYI meetings for three years, waiting for someone above them to notice the problem and nothing changes. Completion bias is real. A Harvard Business Review study found CEOs spend 72% of their total work time in meetings, averaging 37 meetings per week. Product managers compound that pattern because their brains treat formatting and internal status slides the same as designing a pricing model. Same dopamine hits, same sense of progress, and this isn't unique to product managers.

The admin work wins because it's completable right now. The DRAG framework doesn't require an organisational change. You pick five tasks on your current list. You route the drafting and the grunt work to AI, and you use the reclaimed hours for important stuff like customer discovery. That's within your control today. You don't need approval. You don't need a new job title. You need to make a different decision about where your attention goes.

Speaker: Matt Wilkinson [00:03:42]

It's hard to argue with completion bias. It's real. I fall into that trap myself. But the framing that product managers can simply choose to protect their strategic time -- I think that often ignores what actually fills that admin backlog. You know, the most destructive administrative work isn't self-assigned. It comes from above. A director decides the PM is the safest person to reformat a validation protocol. A VP sends an email at 5pm expecting a competitive slide by the morning. A cross-functional team adds the PM to a recurring meeting because removing them feels politically awkward.

I agree that DRAG gives PMs a useful filter for the work they control, but it doesn't give them a way to decline the work that gets pushed onto them. And in many life science organisations, especially in pharma, the culture around PM availability is deeply embedded. You can set a delegation boundary for yourself. You can't set it for your director. The framework seems valuable, but presenting it as enough without addressing role clarity or stakeholder management seems like it's setting the PM up to feel like they failed the framework when the real problem is the system.

Changing Stakeholder Expectations Through Performance

Speaker: Jasmine [00:05:07]

So I think the stakeholder problem is real. I'm not dismissing it. But here's what I've actually seen work. When you respond to that 5pm competitive slide request with, "Look, I'll have AI put the initial research in by the end of the day. Then I'll add strategic context on positioning gaps by tomorrow morning." You've done something important there.

You've committed to a faster, more thorough output than you used to deliver manually. You've routed the mechanical work to AI per the DRAG framework, and you've demonstrated that your DRAG-delegated process outperforms your old manual one, at least from a time perspective. And stakeholders care about outcomes as much as they care about speed.

The more consistently you deliver better results faster with this approach, the more credibility you build to protect the time that matters. And that's not waiting for the organisation to change. That's changing the organisation's expectations through performance. The individual fix is the entry point to the systemic fix, not a substitute for it.

Interrogating the Time-Saving Claim

Speaker: Matt Wilkinson [00:06:34]

That's fair. But the blog promises 15 to 20 hours per week could be freed up using it. That's nearly half a standard work week. I mean, I know that most product managers work way over the standard working week, but in life sciences where AI outputs need serious domain curation before they're usable, isn't that figure setting PMs up to feel like failures when they only get six?

Speaker: Jasmine [00:07:03]

Yeah, so the number is based on applying DRAG consistently across all four of those categories, not just the easy ones. And I'll defend it because I've lived it. For research specifically -- deep competitive intelligence on a new ELISA kit launch, market sizing for a spatial biology platform, reimbursement pathway mapping for a companion diagnostic -- these tasks used to consume days, if not weeks. AI research tools now fire hundreds of secondary queries, consolidate results and deliver 80% of what you need in a matter of minutes. I'm not spending days or weeks anymore. And that maths compounds quickly.

So from an analysis perspective, AI reading beta feedback from 12 early adopters and identifying that eight of them mentioned workflow integration, unprompted -- that synthesis used to take lots of time. Now it takes maybe 20 minutes of review. The AI spots the pattern. That's what AI is really good at. And I interpret what it means for the next phase gate decision. So the human is definitely still in the loop. And the AIM protocol -- which is Actor, Input and Mission -- handles the blank page problem for drafting. Assign a specific role, provide the input context, state the mission. The output is rough, but editing is faster than creating, at least for me. My brain engages completely differently when I have something to react to versus starting from a blank page.

Speaker: Matt Wilkinson [00:09:07]

I think that's fair. And I can definitely see where some of these tasks are absolute game changers -- in analysing big sets of data, particularly trying to conduct a thematic analysis of transcripts from voice of customer interviews. AI is fantastic at that. But I do want to engage with the maths, honestly, because as a recovering scientist and somebody that's been measured on how accurate or not I've been, making sure the framework holds up is, I think, crucial. And I kind of feel that's where it's potentially most vulnerable. The 15 to 20 hours figure assumes a task profile where AI can take a meaningful first pass without extensive setup. For generic business tasks, that's fine. But life sciences PM work is technically specialised in ways that change the economics.

Positioning a 48-colour flow cytometry panel for immunology researchers is not a task where a generic AI draft is 80% of the way there without significant context loading. Getting the AIM protocol right for that task requires domain knowledge, careful specification of the competitive landscape, and review against customer conversations AI never had access to. Now that's cognitive work. It's not zero. And even once you have the capability and the knowledge, there's still the job of getting the data into the right place. I'm not saying it can't be done -- it clearly can, but it takes time to build. I guess I'm just challenging whether the 15 to 20 hours is realistic.

Speaker: Jasmine [00:10:55]

Yeah, the fluency point is a fair one. Week one of DRAG is slower than week four because you're building the AIM protocols, the context libraries, the reusable AI workflows. That set-up cost is real, and so is the effort of getting comfortable and fluent with DRAG itself.

But here's the thing -- the FAQ in the blog addresses recurring requests explicitly. Build the AI workflow once and then reuse it. Competitive updates, FAQ additions, technical translations. Once those workflows exist, they compound. The PM who spent three months building DRAG infrastructure isn't in the same position as the one who tried it twice. So yes, six to eight hours in the early stages is probably more accurate. But the framework's promise is about what it becomes with consistent application, not what it delivers on day one. The honest version of the claim is: 15 to 20 hours is the ceiling you're building toward. That's worth saying plainly. And I think that's accurate.

Cognitive Offloading and the Risk of Strategic Decay

Speaker: Matt Wilkinson [00:12:21]

That's fair enough. So here's an uncomfortable question then. If you outsource the synthesis of the data, do you gradually lose the ability to spot the anomaly that doesn't fit the pattern? The one comment from a key opinion leader that signals a bigger problem -- do you miss those outliers? Is DRAG quietly making PMs less sharp?

Speaker: Jasmine [00:12:47]

Yeah, that's a huge concern, whether you call it cognitive offloading or -- there was a recent paper by some Wharton professors who are now calling it cognitive surrender. The concern assumes synthesis and interpretation are the same cognitive activity, but they're not.

When AI analyses beta feedback from 12 sites and surfaces that eight of them mentioned workflow integration unprompted, I still read that output. I still form my own conclusions about what those patterns mean for the next phase gate decision. When AI drafts a positioning document, I'm rewriting some large percentage of it based on customer conversations AI never heard. The mechanical assembly of information and the strategic interpretation of what it means are very different. The test the blog gives is the right one. If you're spending more time reviewing AI output than talking to customers, you've crossed the line. DRAG should free up to 15 to 20 hours per week, and those hours should go into direct customer interactions -- alpha site visits, discovery calls, pricing conversations with strategic accounts. More customer exposure sharpens judgement. It doesn't dull it. The risk of strategic decay is real, but it's a risk of how you use the freed time, not of the framework itself.

Speaker: Matt Wilkinson [00:14:38]

That's a sound philosophy there, but I want to name a subtler version of the problem that the framework doesn't necessarily fully address. You know, pattern recognition in technical markets comes partly from doing the synthesis yourself. And when you manually read beta feedback from 12 sites, you don't just extract the themes that AI surfaces -- you notice the tone, you catch the comment from the PI at a top-10 research institution that's phrased as mild concern but actually signals a serious workflow problem. You connect it to a conversation from maybe three months ago at the last conference you attended. AI analysis surfaces themes across datasets, and it's brilliant at doing that, but it doesn't necessarily pick up that big picture analysis.

And so I think anomaly detection comes from domain expertise, and if we're using this sort of approach, we really need to make sure we're staying rooted in the domain and keeping that muscle in practice. I'm definitely not saying avoid it -- I think it's a really helpful model. We just have to be really intentional about which analytical tasks we fully delegate versus which ones we use AI to assist with. After all, Excel and spreadsheets have been a huge time saver on tasks it would no longer make sense to do by hand, so these are definitely tools we should be adopting -- please don't get me wrong there. It's just a case of making sure we're not offloading the wrong things.

Speaker: Jasmine [00:16:24]

Yeah, so the anomaly detection point is a really good one. And you're right that AI surfaces themes -- it doesn't always surface the outlier that may matter most. And that's why it's so important to keep the human in the loop, as we've always been saying, and as many, many other experts say. DRAG isn't a prescription to stop reading your data. It's a prescription to stop manually assembling it before you read it. The AI synthesises, but you interrogate. If I get an AI analysis of beta feedback and something feels thin, I definitely go back to the source comments. That option is always available.

The beta site PI example we've described is exactly the kind of signal that a good PM catches during the interrogation pass, not the assembly pass. The assembly was never where that judgement lived. So the question isn't whether DRAG risks your anomaly detection. It's whether you're disciplined enough to interrogate the AI synthesis rather than accepting it as complete -- because accepting it as complete is exactly where cognitive offloading tips into surrender. That's a reasonable bar, and it's not a fatal flaw in the framework.

Conclusion -- DRAG as a Thinking Discipline

Speaker: Matt Wilkinson [00:18:00]

Okay, so I'm convinced, and I think we don't want to conflate the framework with some of the challenges that AI is bringing into the way we think and operate anyway. It definitely feels like DRAG is a really sensible first layer for solving this challenge, and it gives PMs a systematic method for routing mechanical work to AI and reclaiming the cognitive bandwidth for higher value tasks.

I guess we just have to be really, really disciplined about protecting those reclaimed hours before the admin backlog refills and making sure that we're doing enough interrogation of AI outputs that we don't outsource our own anomaly detection as part of that process.

Speaker: Jasmine [00:18:49]

Yeah, that's totally fair. DRAG works when you treat it as a thinking discipline, not just a time-saving tool. The delegation boundary doesn't maintain itself. You have to build the workflows, protect the calendar blocks, and stay close enough to your data to catch the signal AI misses. If you do those things, up to 15 to 20 hours a week is real. If you don't, you've just processed more admin faster.

Speaker: Matt Wilkinson [00:19:24]

Yeah, that's fair. And I think the most useful test from the blog -- if you're spending more time reviewing AI output than talking to customers, you've probably crossed the line.

Speaker: Jasmine [00:19:35]

Right, right. Well, this has been tons of fun. Thanks again, Matt.

Speaker: Matt Wilkinson [00:19:40]

And you too, Jasmine. Look forward to speaking again on the next one.

Speaker: Jasmine [00:19:44]

Sounds good. Bye for now.

Q&A

I recognise the completion bias problem in my own week. Where do I actually start with DRAG without it becoming another project that never gets done?

Pick one recurring task this week -- a competitive update, a FAQ addition, or a technical translation you write from scratch every time. Build a single AIM protocol for it: define the Actor (what role AI should play), the Input (what context it needs), and the Mission (what output you want). Run it once. Review the output against what you would have written manually. If it saves you 30 minutes, you have your proof of concept. Expand from there. One workflow, one week. That is how the infrastructure gets built without becoming a project.
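To make that first AIM protocol concrete, here is a minimal sketch in Python of what a reusable one might look like. The Actor/Input/Mission structure comes from the episode; everything else -- the class, the function name, and the example competitive-update task -- is illustrative, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class AIMProtocol:
    # The three components named in the episode: Actor, Input, Mission.
    actor: str      # the role the AI should play
    context: str    # the Input: the context the AI needs
    mission: str    # the output you want

    def to_prompt(self) -> str:
        # Assemble the three components into a single prompt you can
        # paste into whichever AI tool you use.
        return (
            f"Act as {self.actor}.\n\n"
            f"Context:\n{self.context}\n\n"
            f"Mission: {self.mission}"
        )

# Hypothetical recurring task: the weekly competitive update.
competitive_update = AIMProtocol(
    actor="a product marketing analyst for life science instrumentation",
    context="Competitor press releases and pricing pages collected this "
            "week, pasted below as plain text.",
    mission="Draft a one-page summary of positioning changes, flagging "
            "anything that affects our ELISA kit line.",
)

print(competitive_update.to_prompt())
```

Build it once and only the pasted context changes week to week -- that reuse is where the compounding described above comes from.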

I am worried that if I delegate synthesis to AI I will stop noticing the signals that matter most. How do I protect that?

Separate assembly from interrogation deliberately. Let AI consolidate the data, but schedule a fixed block -- even 20 minutes -- where you read the source material yourself before you accept the AI summary as complete. Treat AI output as a first draft of your interpretation, not the interpretation itself. If something feels thin or surprising in the AI analysis, go back to the raw source. The discipline is in the interrogation pass, not in doing the assembly manually.

My director sends me competitive slides to reformat at 5pm with a morning deadline. DRAG does not fix that. What does?

DRAG does not eliminate top-down admin, but it changes your response to it. When the 5pm request lands, reply with a faster and more thorough output than you used to deliver: AI handles the initial structure by end of day, you add positioning context by morning. You are not declining the work -- you are changing what it looks like. Do that consistently and stakeholders begin to adjust their expectations of how you work, and how quickly you deliver. The individual changes the entry point; the system follows performance over time.

Is the 15-to-20-hour figure actually achievable in life sciences, or is it just marketing?

It is a ceiling you build toward, not a day-one result. Week one of DRAG is slower than week four because you are still building context libraries and reusable workflows. Six to eight hours recovered is a more honest early-stage figure. The compounding comes from recurring task types -- competitive updates, technical translations, VOC synthesis -- where you build the workflow once and reuse it. Life sciences adds domain-loading overhead that generic business tasks do not carry, so the ramp is slower. But the ceiling is real if you invest in the infrastructure consistently over three to four months.

How do I know if I have crossed into cognitive surrender with AI tools?

Apply the test from the episode directly: if you are spending more time reviewing AI output than talking to customers, you have gone too far. Practically, check your calendar for the last two weeks. Count hours in discovery calls, alpha site visits, and pricing conversations versus hours reviewing AI-generated content. If the ratio is inverted, DRAG has become a faster admin loop rather than a strategic lever. The reclaimed hours have one job -- getting you closer to the people whose problems you are supposed to be solving.
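If you want to run that calendar check as more than a gut feel, here is a minimal sketch of the ratio test, assuming you have tallied your own hours for the last two weeks; all category names and numbers below are hypothetical placeholders.

```python
# The "cognitive surrender" test from the episode: over the last two
# weeks, compare hours spent with customers against hours spent
# reviewing AI output. Replace the placeholder numbers with your own.
customer_hours = {
    "discovery calls": 4.0,
    "alpha site visits": 3.0,
    "pricing conversations": 1.5,
}
ai_review_hours = 11.0  # hours spent reviewing AI-generated content

total_customer_hours = sum(customer_hours.values())

if ai_review_hours > total_customer_hours:
    print("Ratio inverted: DRAG has become a faster admin loop.")
else:
    print("Ratio healthy: the reclaimed hours are reaching customers.")
```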