
 

S2 Ep5: The Capture Gap: What Conference Prep and Portfolio Reviews Have in Common

By Matt Wilkinson

AI accelerates conference prep and TAM models, but capture depends on listening skills and pain point differentiation, not just data.

 

Shownotes

Most life science teams show up to conferences with product specs and TAM models built on market size. Both fail for the same reason: they prove something exists without proving you can win.

This episode is for product managers and marketers preparing for their next conference or portfolio review. Matt Wilkinson and Jasmine Gruia-Gray unpack data from the ELRIG Drug Discovery 2025 exhibitor survey and dissect why competitive revenue triangulation without capture strategy kills viable products.

Preparation without presence fails at conferences, and TAM without capture strategy fails in portfolio reviews.

What you will learn:

  • Why 86% of exhibitors lack battle cards and how AI collapses the preparation gap from days to minutes
  • The authenticity risk when AI prep becomes a script instead of a launchpad for listening
  • Why leading with TAM size gets you stumped by "why would anyone switch?" in portfolio reviews
  • How pain point differentiation prevents conservative TAM analysis from killing $60 million opportunities
  • The three executive objections every product manager must answer: unique pain solved, economic switching benefit, and beachhead proof

Keywords: life science marketing, conference preparation, battle cards, AI-assisted prep, TAM analysis, market sizing, pain point differentiation, beachhead strategy, portfolio review, ELRIG survey, competitive intelligence, customer discovery

Subscribe to Strivenn Thinking for weekly insights on AI-enabled life science marketing. Visit strivenn.com for frameworks, tools, and strategic resources.

Transcript

Matt Wilkinson and Jasmine Gruia-Gray discuss two critical gaps in life science commercialisation: conference preparation and TAM analysis. Drawing from ELRIG Drug Discovery 2025 survey data and real product portfolio examples, they explore how AI accelerates preparation while authenticity requires listening, and why market sizing without capture strategy kills viable products.

Conference Preparation Gaps and AI-Assisted Battle Cards

Jasmine [00:42]

Hey, Matt. Great. How are you?

Matt Wilkinson [00:44]

Hey Jasmine, how you doing?

Matt Wilkinson [00:48]

I'm great, thank you.

Matt Wilkinson [00:49]

Yeah, really excited to be covering these two stories today.

Jasmine [00:53]

Yeah, me as well. So let's kick it off.

Jasmine [00:56]

So on January 20th, we did a thing. That thing is we ran a webinar with Sanj Kumar, the CEO of ELRIG, and unpacked what the data from our survey of exhibitors at ELRIG Drug Discovery 2025, a festival of life science, means and what teams can do about it before their next exhibit at a conference.

Jasmine [01:24]

The survey exposed a preparation gap that runs deeper than missing battle cards. Nearly half of the exhibitors default to product-focused booth messaging, describing features and specs rather than outcomes and pain points. Meanwhile, the tactics that actually drive post-event action are expert consultations (44%) and live demos (30%), both of which require preparation that most teams haven't done. The webinar introduced the STAR (S-T-A-R) framework for battle card creation, proposed AI-assisted prep as a shortcut to a first draft, which I often do, and I know you do as well, and made the case for putting a marketer back in the booth for real-time intelligence gathering. The commercial stakes are straightforward: conference budgets are significant, and the ROI calculation depends entirely on what happens before and during the event, not on badge scans or footfall. The preparation gap isn't a skills problem, it's a prioritization one. So Matt, where are you landing on this?

Matt Wilkinson [02:49]

So when we look at the levels of AI adoption, and some of these commercial challenges we see across teams, I really feel that one of the biggest opportunities that life science tools and service companies have is the ability to use AI to overcome that preparation gap and also to help them with the prioritization.

Matt Wilkinson [03:19]

That 86% figure of people not really having battle cards, it's a preparation access problem. It's not a capability problem at all. Most teams know they should have battle cards. They don't build them because the blank page problem is real. Competitive research is slow. Positioning takes consensus. And the deadline is always the travel day. Often people have so many competing priorities that they're really only making sure they turn up, and they prep once they're there.

Matt Wilkinson [03:52]

You know, AI collapses the activation barrier for getting to a good-enough answer from days of meetings down to just minutes. The prompt framework we provide in the article, which can be found in the Strivenn Thinking section of our website, produces a structurally sound first draft in a single pass. It's not perfect, it never will be, but the better the information you provide, the better it's going to be. It really does give you an architecture, in minutes, that a marketing team would normally spend days assembling. So I feel that makes the argument for adopting AI much, much stronger. How about you, Jasmine?

The Authenticity Challenge

Jasmine [04:44]

So I see it slightly differently. I worry that with adopting AI, you run the risk of losing the human side of things, the authenticity side of things. AI is great at optimization. It's great for polishing. But that real-world lived experience, I worry, will get diluted out with AI. In the webinar, you gave this great example from when you were working on a project with Artel and ran the pipetting Olympics. I thought that was brilliant. It was a great competitor countermeasure, if you will. And it was purely human, purely authentic. It was a great opportunity to listen to what people were saying as they went through the Olympics.

Matt Wilkinson [05:48]

Yeah, and I understand where you're coming from with that concern about diluting the authenticity and the humanness of our messaging. But turn it around: there's a lot of competitor information out there on websites. If I go in unprepared, I've got no idea how our offers align against somebody else's. But if I've done that research and used AI to help me distill it down, then at the very least I know what I'm up against. And I think that gives you a much sounder foundation on which to build that humanness and really focus on paying attention to the human in front of you.

Matt Wilkinson [06:45]

And then taking notes to make sure that the conversation wasn't just a nice conversation, but was meaningful to both of us. And that meaning lasts beyond the two or three minutes we're at the booth together, so that we can actually continue the conversation afterwards.

Jasmine [07:05]

Yeah, I guess where I'm concerned is the difference between preparation and using AI as a crutch, and that's where you lose your authenticity. People stop listening to what the person in front of them is saying because they've prepared their battle card. It's all in front of them. It's ingrained in them. And they don't really take the time to probe a little deeper with the person in front of them. They just go directly into that AI-prepared spiel.

Matt Wilkinson [07:46]

That could be exactly the same as going into an exhibition having been given scripts for the booth: this is what I have to say about this. Or what so many people do is default to talking about the technical specifications that may or may not make their product or service different. So it's a valid concern, but I think it focuses more on booth training, and on making sure that anybody in a commercial team, when they're dealing with customers, is spending at least twice as much time listening to the people they're speaking to as they are speaking.

Matt Wilkinson [08:29]

You know, after all, there is that famous saying that we were given two ears and one mouth for a good reason. And I think that holds especially true in any conversation that we're having with prospects and customers.

Jasmine [08:44]

Yeah, I think that's fair, that listening skills should be sharpened. Using clarifying language is super important to help you get a deeper understanding of the pain points, of what the person in front of you likes or dislikes about the product they're already using, so you can connect the dots between what you heard and the preparation you did with AI. I think that's where it becomes a great hand-in-glove situation.

Matt Wilkinson [09:28]

And I think that's something we can agree on here: we're using AI to optimize, to get to the point where we've got a really strong foundation, so that we can then spend time really focusing on what the customers are saying. I don't think we should be looking at AI as a band-aid that fixes every problem. We need to be looking at how we make sure our customers and prospects get the most out of those interactions. And a lot of that is giving them the respect of understanding that they have other options out there, and that it's our job not just to understand our portfolio, but to understand the context in which that portfolio sits. So we might actually recommend that a competitor's offer is better for a particular challenge. Hopefully our offer is better across a whole range of things, so that we're the obvious choice. But we need to be upfront about saying, actually, that's where competitor A is going to beat us; you don't want to switch because of these things. And that's really important because, in this age of AI, there is nothing worse than bad reviews online. We really don't want to be trying to sell people on false promises. I think that's one of the things that AI search, and the ability to build your own buying guides using AI, is really going to bring to the forefront.

Jasmine [11:04]

Yeah, maybe we need to create a new word here, some mashup between authenticity and accelerator, acceleronicity or something. But the word I really like here is launchpad. If you consider AI prep work as that launchpad into the conversation, you really can't go wrong. The preparation gap is real, and absolutely AI can help to close it. But the authenticity gap opens the moment the card becomes the script rather than the starting point. So it's that combination of using AI for prep, then using your listening skills and clarifying language to get beyond the script.

Matt Wilkinson [12:00]

And I think that's a really nice way to wrap that piece up.

TAM Analysis: Market Size vs Market Opportunity

Matt Wilkinson [12:04]

Next up, we have an article that you wrote on total available market, and how most teams are confusing market size with market opportunity. This is something that's really interesting, and probably a mistake I've made many times before realizing it. In that article, you say that many companies present competitive revenue proving, shall we say, a $50 million market exists. And then somebody will say, well, why would anyone switch to us? And all of a sudden your total available market means nothing. You know, if a competitor sold 2,000 qPCR kits at $850 each, that turns into $1.7 million in real revenue. So we know the market exists. But if you target the same pain points with incremental improvements, you're fighting for share in an established category. The team that identified a $15 million segment with acute unmet needs gets funded because they answer three questions: What pain do we solve that the competition ignores? What economic benefit justifies switching? And which beachhead has pain acute enough to drive adoption?
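The competitive revenue triangulation Matt mentions is simple arithmetic, and a short sketch makes it concrete. This is illustrative only, using the episode's numbers (2,000 qPCR kits at $850 each); the function name is ours, not from the article.

```python
# Competitive revenue triangulation: estimate a rival's real revenue
# from observed unit sales and average selling price.
def triangulate_revenue(units_sold: int, avg_selling_price: float) -> float:
    """Return estimated revenue in USD from unit sales."""
    return units_sold * avg_selling_price

# Episode example: 2,000 qPCR kits sold at $850 each.
revenue = triangulate_revenue(2_000, 850.0)
print(f"Proven competitor revenue: ${revenue:,.0f}")  # prints $1,700,000
```

The point of the calculation is exactly what the episode says: it proves the market exists, and nothing more; the capture question still has to be answered separately.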

Matt Wilkinson [13:30]

Executives fund differentiated value and defensible segments, not market size without capture strategy. So I'm really curious about this. Jasmine, why do most product managers walk into portfolio reviews with, you know, TAM models that prove market size but can't answer the capture question? And what does that tell us about how we're teaching market sizing?

Jasmine [13:56]

Yeah, I think this is a really important question. What I've found happens most often is that we teach product managers to build TAM models bottom-up. No problem. Lab counts, publication trends, competitor revenue triangulation, all good. They get really good at those mechanics: filtering to relevant labs, tracking methodology adoption, estimating competitor unit sales, all that. Then they walk into portfolio reviews and present the addressable market as, let's say, $50 million and growing at 12% annually. And they get stumped when the exec says, why will anyone switch from their current solution to ours? Fair question. And often the product manager hasn't prepared for that kind of debate. The model isn't built in a way that lets them point to part of it and say, well, here are the numbers I factored in to address that question.

Jasmine [15:07]

The gap is that market sizing became a forecasting exercise. I've done that myself, and I've seen lots of product managers do it, instead of building a capture strategy. What I mean is we're training product managers to prove the market exists, not to prove they can win meaningful share in those markets. That's ultimately what it's all about. And the fix starts with reframing what TAM analysis is for. It's not predicting revenue. It's building the evidence base that answers three executive objections: What pain do we uniquely solve? What economic benefit justifies switching? And which beachhead proves both? Competitive revenue shows the market is real, all good, but pain point differentiation shows you can capture it. And that ends up being what's most important and what the executive wants to hear.

Matt Wilkinson [16:15]

From a practical perspective, can you walk us through how pain point differentiation prevents the flow cytometry reagent problem, as you call it, where conservative TAM analysis kills a viable product that actually had a $60 million market?

Jasmine [16:34]

So the flow cytometry reagents example got killed because the product manager led with the conservative TAM, let's say $20-25 million. If you're an exec listening to that, to me that says: not really worth it. Especially compared to other projects that were approved at a $50 million or higher TAM. The actual market turned out to be $60 million. The product was viable, but the analysis failed. Pain point differentiation prevents this. Instead of leading with the total market size, lead with the beachhead intensity. Translational research cores running 20-plus samples a day face a bottleneck: current reagents require four hours of hands-on time. Ours reduces that to 45 minutes. Whoa, that's significant differentiation, delivering $12,000 in annual cost savings per lab. Now you're speaking money, but you're also speaking time, and those are things that core facilities do understand. You're not asking execs to bet on your forecast. You're showing acute pain, quantified benefit, and a problem competitors ignore. The TAM becomes supporting evidence. The beachhead becomes the core message.

Jasmine [18:12]

And at 30% penetration and an $8,000 annual average selling price, that's $4.3 million in year-one revenue. Adjacent segments expand that to $25 million-plus, but we're funding beachhead domination that proves expansion. So it's basically flipping the story on its head: you start with the differentiation, the beachhead, and the pain point, and end with the total available market.

Matt Wilkinson [18:45]

That's such an important shift. So for product managers, what shifts when they stop treating market sizing as a forecasting exercise and start treating it as an objection-handling framework for portfolio review?

From Forecasting to Objection Handling

Jasmine [19:04]

Yeah, so here's what changes. Instead of building a TAM model projecting revenue three years out, you build a defense answering questions you know are coming. The old approach, for example, is based on lab counts and adoption curves: we'll capture 15% share by year three, generating $7.5 million. The exec asks, what if adoption is half that? You don't have an answer in this old scenario. In the new approach, you walk in with evidence for three objections. Why will customers switch? The proof: research cores run 20-plus samples, lose four hours daily; we cut that to 45 minutes, delivering $12,000 in annual savings, validated with eight sites. That's bringing your voice of the customer and actual ethnographic research right into the TAM discussion. Is the market real? Competitors sold 2,000 units at $850 each, proving $1.7 million in revenue. Why this beachhead? These 180 labs have the most acute pain. They become references for pharma. You're not predicting what will happen. You're proving what behavior you've observed through your voice of the customer, and which segment responds strongest.
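The switching-benefit evidence above can be laid out as a small worked calculation. A sketch under stated assumptions: the 180 labs, the 4-hour versus 45-minute hands-on times, and the $12,000 per-lab savings come from the episode; the 250 working days per year is our assumption for illustration and is not stated in the episode.

```python
# Beachhead switching economics, using the episode's illustrative figures.
HOURS_BEFORE = 4.0        # hands-on hours per day with current reagents
HOURS_AFTER = 0.75        # 45 minutes with the new product
WORKING_DAYS = 250        # assumed working days per year (not from the episode)
SAVINGS_PER_LAB = 12_000  # validated annual savings per lab, USD
BEACHHEAD_LABS = 180      # labs in the beachhead segment

# Time saved per lab per year, and the total quantified economic
# benefit across the beachhead segment.
hours_saved = (HOURS_BEFORE - HOURS_AFTER) * WORKING_DAYS
total_benefit = SAVINGS_PER_LAB * BEACHHEAD_LABS

print(f"Hands-on hours saved per lab per year: {hours_saved}")
print(f"Annual economic benefit across the beachhead: ${total_benefit:,}")
```

Numbers like these are the "evidence for three objections" in code form: observed pain quantified in time and money, rather than a share forecast.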

Matt Wilkinson [20:36]

That's really interesting. So what you're telling me is that market sizing isn't about predicting the future, but about building evidence that answers those three questions executives always ask: What pain do you solve that competitors ignore? What economic benefit justifies the switch? And which beachhead segment proves it? I'm curious to hear how you'd summarize it.

Jasmine [21:00]

Yes, those are the three questions every product manager needs to prepare for in the TAM discussion. The productive tension between competitive revenue triangulation and pain point differentiation never fully resolves. You need both. But when you walk into portfolio reviews, lead with the story that proves you can capture meaningful share in a focused segment, then expand from strength, right? That voice-of-customer strength is irreplaceable. And it comes back to what we were talking about at the beginning with conferences: those listening, challenging, and clarification skills are so important. You can bring all of that back into the story when you talk about TAM. Start with one beachhead where customer pain is acute enough to drive adoption, and build your TAM defense around that intensity, not the total market size.

Matt Wilkinson [22:05]

Well, thank you so much. I've certainly learned a lot about how to avoid that challenge in future, and I hope our listeners have as well. Thank you for a great article, Jasmine. Really enjoyed it.

Jasmine [22:17]

Thank you, Matt. Looking forward to our next discussion soon.

Matt Wilkinson [22:21]

Me too.

Q&A

We have a conference in three weeks and no battle cards. Can AI actually help us prepare in time?

Yes, but only if you treat AI as a launchpad, not a finish line. Use the STAR framework prompt to generate first drafts in minutes, covering situation, talking points, anticipated objections, and references. Validate the output against real customer conversations you've had. Schedule two 90-minute sessions: one to generate and refine cards, one to role-play booth conversations focusing on listening and clarification skills. AI collapses preparation time from days to hours, but authenticity still requires human practice.

How do I prevent my team from just reading AI-generated battle cards like scripts at our booth?

Train booth staff to use the 2:1 listening rule - spend twice as much time listening as speaking. Open every conversation with a clarification question: "What brought you to our booth today?" or "What challenge are you trying to solve?" Use the battle card to inform your responses, not script them. Role-play scenarios where the prospect's pain doesn't match your prepared content. Debrief daily on what you heard versus what you expected. The card provides structure; your presence provides value.

Our last TAM analysis led with $50 million market size and got rejected. How do I reframe for the next portfolio review?

Flip your story structure. Start with beachhead intensity, not total market. Identify the segment with most acute pain - translational cores, specific therapy areas, high-throughput labs. Quantify their bottleneck in time and cost. Show how your solution delivers measurable economic benefit validated with real sites. Present competitive revenue as proof the market exists, then explain why your differentiated pain point lets you capture meaningful share. Lead with "we solve this problem competitors ignore" before discussing market size.

We conducted voice of customer research but our TAM model doesn't include it. How do we integrate those insights?

Build your TAM defense around three executive objections using VOC as evidence. For "why will customers switch," cite specific pain points from your research with quantified impact. For "is the market real," combine competitive revenue with adoption validation from customer interviews. For "why this beachhead," reference the segment where pain intensity was highest in your research. Transform VOC from background research into your primary proof that you can capture share in focused segments. Eight validated customer conversations outweigh elaborate forecasting models.

How do we balance AI efficiency with maintaining authentic customer relationships at conferences?

Use AI for pre-event intelligence gathering and post-event follow-up, but prioritize human presence during conversations. Pre-conference: AI generates competitive intel, battle cards, and booth training scenarios. At the booth: focus entirely on listening, asking clarifying questions, and capturing unique insights. Post-event: AI helps synthesize notes and personalize follow-up. The ROI comes from relationships built through authentic listening, supported by AI-accelerated preparation and execution. Treat AI as infrastructure that frees you to be more human, not less.
