S2 Ep9: Are You Misclassifying What Matters?
By Matt Wilkinson
AI discoverability and product line extensions share the same root problem: misclassification kills outcomes before execution even begins.
Shownotes
Most life science companies are optimising the wrong thing - and paying for it twice. In this episode, Matt and Jasmine expose two pressure patterns that product managers and marketers recognise immediately but rarely diagnose correctly: AI visibility that depends on brand signals most companies don't have, and line extension labels applied for political convenience rather than classification accuracy.
This episode is for life science marketers and product managers navigating AI-driven discovery, stage gate processes, and the uncomfortable conversations that live upstream of both.
Key idea: Whether you are building AI discoverability or launching a new product, the classification decision is the fault line - not the execution that follows from it.
What you will learn
- Why brand search volume, not content quality, is the strongest predictor of AI citation
- How entity consistency across platforms unlocks AI visibility for challenger brands
- Why the line extension label is often a timeline tool rather than a classification decision
- How to apply a three-dimension framework (who, how, what) before stage gate classification locks in
- What product managers can do when the classification fight is already lost at the gate
- Why continuous improvement of the process itself matters as much as the products moving through it
Chapters
[00:42] Introduction
[01:14] AI citation: why brand search volume beats content quality
[05:25] The entity consistency fix most companies haven't done
[07:38] Agents, content operating systems, and the compounding content programme
[09:20] Brand override and word of mouth in an AI-mediated world
[10:25] The line extension trap - who pays when the label is wrong
[12:43] The triad framework: who, how, and what
[15:03] Making the deployment gap concrete enough to fund
[17:29] What to do when leadership won't listen - yet
[20:41] Is this a skills problem or a structural failure?
[22:06] Continuous improvement of the process, not just the product
[23:10] Close
Keywords: AI discoverability, AEO life science, entity consistency, brand search volume, line extension misclassification, stage gate, product manager, life science marketing, GEO content, AI citation
Transcript
In this episode of A Splice of Life Science Marketing, Matt Wilkinson and Jasmine Gruia-Gray work through two connected ideas: first, how AI discoverability actually works and why most life science companies are building on the wrong foundations; second, how the line extension label - applied for speed rather than accuracy - creates downstream costs that fall squarely on the product manager. Both arguments converge on the same uncomfortable insight: the classification decision upstream is where outcomes are decided, long before execution begins.
AI Discoverability and Entity Consistency
Speaker: Jasmine [00:45]
So I'm really excited to cover our two topics today. One is your SEO rankings - are they irrelevant if AI has never heard of you? And we'll pull on that thread as it applies to line extensions, which is something I wrote a blog about. Maybe to kick it off, let's start with the SEO and AEO side of things, okay?
Speaker: Jasmine [01:16]
So a growing body of research into how large language models select sources has revealed a structural break from traditional search logic. The strongest predictor of AI citation is not backlink count, domain authority, or content quality. It's brand search volume - how frequently people look for a company by name. For companies that have spent years building search authority through content programmes and link acquisition, this is a disorienting finding. The assets they invested in may not transfer.

Around 60% of ChatGPT queries are answered from training data alone without any live retrieval, meaning content published this quarter is simply invisible for those queries. What matters instead is whether the model formed a coherent picture of a company during training, which requires consistent entity representation across platforms. A mismatch between how a brand describes itself on its website versus LinkedIn versus Wikidata reads to the model as ambiguity - and ambiguity suppresses citation confidence. Only 11% of domains are cited by both ChatGPT and Perplexity. Visibility on one platform doesn't imply visibility on others.

The GEO research from Princeton suggests structured, statistics-led content optimised for extractability can improve AI visibility by 30 to 40%. But the article's core argument is that structural prerequisites - entity consistency and brand recognition - have to come first. Without those foundations, content optimisation is cosmetic. So is brand search the key metric here, Matt?
Speaker: Matt Wilkinson [04:00]
Well, I don't think it's just a metric. I think it's actually a feedback loop with a closing window. The brands already being searched by name get cited by AI, which increases their visibility, which increases their brand searches. One of the things that's really interesting is that many of the bots going out there doing AI searches actually are dual purpose - when they're scraping the data from a website to answer a search query, they're also bringing that data back into the training data for the next model. So every time you're found, more often than not, you'll bring that data into the next training run. That's really interesting for life science tools companies that are well known inside specific research communities but invisible outside of them, because this isn't just a content problem. It's a structural position problem. You can't write your way out of not being known. The uncomfortable implication is that the companies best positioned for AI discoverability right now are the ones that have already dominated traditional awareness - the Thermo Fishers and Abcams of the world. Challenger brands and specialists that built genuine domain expertise but narrow recognition are entering a game where the scoring system favours whoever was already winning.
Speaker: Jasmine [05:25]
Yeah, so every new channel offers a compounding advantage to early movers - but that's not a reason to stay out. It's a reason to move now while entry cost is low and competitors haven't started. The most interesting question is what actually unlocks the loop for a company that isn't already a household name. The research is pretty clear. It's not more content. It's entity consistency - the unglamorous work of making sure your company is described the same way across all channels: your website, LinkedIn, Crunchbase, Wikidata, and so on. That's a one-time audit and fix, and most life science companies haven't done it. That's actually the lever most companies should be pulling first.
Speaker: Matt Wilkinson [06:26]
Absolutely. One of the challenges is that as companies grow, people lose access to accounts. All of a sudden you may not have access to your Crunchbase account. Marketing may no longer keep track of all the accounts describing what the brand is and what it does. And as brands evolve and get rebranded, that becomes a bigger and bigger problem. The harder truth is that most life science marketers don't even know that Wikidata exists - that they need to be present there, that what Crunchbase says about them needs to match Wikidata, which needs to match LinkedIn, which needs to match their website, which needs to match wherever their brand is being mentioned. Being known for one crucial thing is really, really important. If you're not present consistently, any optimisation you do is built on top of a broken foundation.
Speaker: Jasmine [07:38]
But presumably you can create an agent that can help you do that.
Speaker: Matt Wilkinson [07:43]
You can create agents to go out and help you find all the mentions of your brand that you might have control of - if the agent can actually access those sites. We know that a lot of agents get blocked by robots.txt or Cloudflare settings one way or another. But yes, entity yourself once. And then there is the ongoing content optimisation and content competition. The brands that win AI citation long-term aren't just findable, they're repeatable and extractable. It's about having stuff that people want to talk about time and time again - that becomes part of a content operating system. The research does show that 30 to 40% visibility can be gained from structured GEO content. The entity audit closes the gap. The content programme widens it.
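For readers who want to try the access check Matt describes, here is a minimal sketch using only the Python standard library. The URLs and the user-agent string are illustrative placeholders, and it only tests robots.txt - Cloudflare-style blocking won't show up here.

```python
# Minimal sketch: check whether an AI crawler is allowed to fetch
# the pages where your brand entity is described. The URLs and the
# user-agent string are illustrative placeholders.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

PAGES = [
    "https://www.example.com/about",      # your website (placeholder)
    "https://www.wikidata.org/wiki/Q42",  # your Wikidata item (placeholder)
]
CRAWLER = "GPTBot"  # one commonly cited AI crawler user-agent

for url in PAGES:
    parts = urlparse(url)
    parser = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt
    verdict = "may" if parser.can_fetch(CRAWLER, url) else "may NOT"
    print(f"{CRAWLER} {verdict} fetch {url}")
```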
Speaker: Jasmine [08:34]
The tension that doesn't resolve with entity consistency alone is that this is work that generates no short-term signal and is nearly impossible to attribute. The conversation I might have to have with the CFO is: we are doing invisible infrastructure work today so that our content programme generates compounding returns in 18 months. That's a hard pitch. But the brands that can make it and act on it are the ones that will own the shortlist when buyers stop Googling and start asking AI.
Speaker: Matt Wilkinson [09:20]
Absolutely. And there's a really important thing we haven't yet mentioned - the brand override effect, where if you do a great job and your customers love you, people will actually want to feature you in those searches. A lot of this technical work was probably part of good SEO and digital hygiene programmes in the past. But what it really means now is that it behoves every marketer to make sure that brand experience and word-of-mouth marketing are top of mind. That's going to be a crucial play going forward.
Speaker: Jasmine [10:12]
Couldn't agree with you more. It's pulling on that consistency and that consistent customer experience that will be wildly helpful wherever the customer sees your brand.
The Line Extension Trap
Speaker: Matt Wilkinson [10:25]
Yeah, absolutely. And I think that's a good place to move on to the article you wrote this week, which was about how a line extension might just cost you six months of progress. What I found really interesting about this is that you surfaced a pressure pattern most life science product managers recognise immediately - leadership classifying products as line extensions to compress timelines, and then leaving the product manager to absorb the consequences when field conditions contradict the label. The data point anchoring the argument is stark: over half of product managers in a recent set of conversations reported this exact pressure. The downstream cost is quantified - four to eight months of unplanned field time, hundreds of thousands in emergency validation work, two to three quarters of delayed revenue. You proposed a three-dimension framework assessing who is using the product, how they are integrating it, and what job they're hiring it to do. If any dimension shows high-risk variation from the parent product, the line extension label is wrong and alpha and beta testing needs to happen. The sharpest edge of the blog is its positioning of the classification decision itself as the fault line - not the testing methodology that follows from it.
Speaker: Jasmine [12:43]
The triad framework of who, how, and what really is where to start - and bringing that three-dimensional framework upstream in early product development discussions, not waiting until it's almost too late. The moment the line extension label is used to answer a timeline problem rather than a classification question, it's crossed from a planning tool to a political shortcut. The product manager can hold that line if they have a diagnostic that makes the distinction legible before the stage gate.
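To make the diagnostic concrete, here is a hypothetical sketch of the who/how/what triad as a pre-gate checklist. The three dimensions and the any-dimension rule come from the episode; the field names and the example assessment are ours, not a prescribed tool.

```python
# Hypothetical sketch of the who/how/what triad as a pre-gate checklist.
from dataclasses import dataclass

@dataclass
class Dimension:
    name: str        # "who", "how", or "what"
    variation: str   # how the candidate differs from the parent product
    high_risk: bool  # does that variation break a parent-product assumption?

def classify(dimensions: list[Dimension]) -> str:
    # Per the framework: high-risk variation on ANY dimension means
    # the line extension label is wrong and alpha/beta testing applies.
    flagged = [d.name for d in dimensions if d.high_risk]
    if flagged:
        return ("Not a line extension - high-risk variation on: "
                + ", ".join(flagged) + ". Plan alpha/beta testing.")
    return "Line extension label defensible on all three dimensions."

# Example assessment (illustrative values only)
assessment = [
    Dimension("who", "same core users as the parent", False),
    Dimension("how", "new integration into pharma QC workflows", True),
    Dimension("what", "same job-to-be-done", False),
]
print(classify(assessment))
```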
Speaker: Matt Wilkinson [13:32]
I think it's really interesting because we're looking at what's a line extension and what's an incremental line extension. One of my favourite ideas from my MBA was whether the iPhone itself was a line extension of the iPod - it added greater levels of connectivity and then functionality over time. In many ways it was a line extension, but also a brand new category creation. So the real challenge here is: who's defining this? Is there a process within the organisation to properly define what those are, so everybody's working from the same starting point?
Speaker: Jasmine [14:26]
I think my triad framework was the start of a process, but it obviously needs to be adapted depending on what the product portfolio looks like. The important part is to be able to ask these questions very early on and not wait until timelines are under pressure and you no longer have the opportunity to answer them.
Speaker: Matt Wilkinson [14:56]
So can a product manager actually win the classification fight at the stage gate, or does it really need to happen upstream?
Speaker: Jasmine [15:03]
The pre-gate move is the right one. Run the three-dimensional assessment before the classification locks in, and present it with quantified consequences attached. Make the deployment gap concrete enough that leadership can't miss it. For example: if our assumption about pharma QC documentation requirements is wrong, we'll burn six figures. Two alpha sites in 60 days for $15,000 closes that gap. That's the quantified perspective that will handle objections from leadership. This is not a technical argument - it's a business case. Most executives will engage with that framing if the numbers are credible and the ask is scoped.
Speaker: Matt Wilkinson [16:19]
I think that's a really interesting way of looking at it. But I've witnessed many organisations where the stage gate is already pretty much decided by leadership before these conversations can even start. By the time the product is on the agenda, the commercial team has already built the launch timeline around the line extension label. Finance has already started modelling the resource ask accordingly. And the VP has already told someone above them when the product is shipping. A product manager arriving at that meeting with a three-dimensional framework is technically correct but organisationally late. This basically has to happen before those conversations happen. And it means product managers need some pretty significant political chops to get anything done internally.
Speaker: Jasmine [17:29]
Yeah, what I've also witnessed is that sometimes you have to let the mistake play out. Let the label go through, then really monitor the consequences after launch - how much additional field time is going into training customers, how much additional sales time is going into persuasion - and pull that debrief together as part of the post-launch stage. Bring in executives not to wag a finger, but to present the evidence of what could have been avoided. Then use that to get permission to modify the process with those learnings.
Speaker: Matt Wilkinson [18:37]
Yeah, it's interesting, because if those concerns aren't listened to, it's still the product manager that absorbs the damage - both the political damage for any mistakes made, and the expectation that it was their job to fight harder, to be more persuasive. That can really damage careers. But it also feels like the person closest to predicting the problem is then the one saying "I saw this coming, I warned you about it, and now I'm the one having to fix it." Is this a competency problem or a structural failure? I can see it being both.
Speaker: Jasmine [19:28]
It's a fine line. The data point that more than half of product managers report this pressure doesn't mean the situation is unfixable at the individual level - it means the skill is under-supported. Product managers who know how to make deployment gaps concrete, who enter stage gate meetings with quantified consequences attached to their ask, and who understand how to scope a targeted alpha rather than demanding a full new product development timeline, win this fight more often than those who raise the concern as a general caution. The framework is the difference between "I think we should test more" and "here are the specific things we don't know, here's what happens if our assumptions go wrong, and here's the minimum intervention that closes the gap." The second version is fundable. Teaching product managers to make that argument - and having their managers support them - is the actual leverage point.
Speaker: Matt Wilkinson [20:41]
Yeah, that's really interesting. As you said, those with greater political chops navigate these problems better. But if more than half of product managers are experiencing this, there seems to be a broken process as well as a potential skills gap. That feels like a real opportunity for a lot of organisations to ask: are we training our product managers? And what's the process for making sure that the people closest to the problems are listened to? This really comes down to an organisational challenge that leadership needs to pay attention to - certainly if these things are happening time and time again. The implication for product managers is uncomfortable: the most important conversation may be the one with your VP about whether the gate process itself needs to change. And can you convince them that it does? That requires critiquing the system you operate inside. But that's where the leverage really is.
Speaker: Jasmine [22:06]
Yeah, absolutely. Having a continuous improvement mindset is super important along the whole chain of command.
Speaker: Matt Wilkinson [22:18]
And it sounds like it's not just continuous improvement of the products going through the process, but of the process itself. And I think that's sometimes where organisations struggle - "this is the way we do things, this is the process we follow." When the process is found to be unsuitable, the instinct is to blame the person putting the product through it, rather than to look at whether the process itself needs to change.
Speaker: Jasmine [22:43]
It's an easy scapegoat to blame the product manager or others involved in the project. The more mature view is really to dig deeper into the process, understand the trade-offs you're making, and understand the changes in market conditions that are driving the need for continuous improvement of the process.
Speaker: Matt Wilkinson [23:10]
It was a fun conversation and thank you for digging into this even more - I've certainly learned a lot. Thank you.
Speaker: Jasmine [23:16]
Thank you as well, Matt. I learned a lot more about AEO and the need to build on the SEO work that many people are already doing, and the importance of that word: consistency.
Speaker: Matt Wilkinson [23:34]
Thank you to everybody who's listened in thus far. We look forward to having you with us on the next one.
Q&A
Our AI discoverability article says entity consistency is the first fix - but where do we actually start?
Pick one person and give them one afternoon. Run a search for your company name across your website, LinkedIn, Crunchbase, and Wikidata. Write down exactly how your company is described in each place - category, what you do, who you serve. Any mismatch is ambiguity the model reads as uncertainty. Fix the descriptions to match before you publish another word of content. That's the audit. It costs nothing and most companies haven't done it.
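As a sketch of what part of that afternoon can look like in code, here is one way to pull your Wikidata description for comparison against your canonical one. It uses the public Wikidata search API and the third-party `requests` library; the company name and canonical description are placeholders, and LinkedIn and Crunchbase are best checked by hand since they restrict automated access.

```python
# Sketch of the Wikidata half of the entity audit. COMPANY and
# CANONICAL are placeholders; the equality test is deliberately
# naive, so review the output by hand.
import requests  # third-party: pip install requests

COMPANY = "Example Bio Ltd"  # placeholder brand name
CANONICAL = "life science tools company making single-cell reagents"  # your one description

resp = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={
        "action": "wbsearchentities",
        "search": COMPANY,
        "language": "en",
        "format": "json",
    },
    timeout=30,
)
results = resp.json().get("search", [])

if not results:
    # Absence from Wikidata is itself a finding worth recording.
    print(f"No Wikidata entity found for '{COMPANY}'.")
for item in results:
    desc = item.get("description", "(no description)")
    status = "matches" if desc.lower() == CANONICAL.lower() else "DIFFERS from"
    print(f"{item['id']}: '{desc}' {status} the canonical description")
```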
We publish a lot of content. Why isn't that translating into AI visibility?
Because around 60% of AI queries are answered from training data, not live retrieval. Content published this quarter is invisible for those queries. What matters is whether the model formed a coherent picture of your brand during training. If your entity signals are inconsistent across platforms, the model treats your brand as ambiguous and suppresses citation confidence. Content volume is irrelevant until the foundation is solid. Fix entity consistency first, then let content compound on top of it.
How do I make the business case for entity consistency work when there's no direct revenue attribution?
Frame it as infrastructure with a quantified risk, not a marketing initiative. The argument is: AI referral traffic converts at higher rates than organic. We are invisible to AI right now. One person, one afternoon, closes the structural gap that no amount of content spend can fix. Compare the cost of the audit to the cost of being absent from AI-generated shortlists in 18 months. Most CFOs will fund a one-time fix framed that way.
As a product manager, how do I push back on a line extension label when leadership has already committed to it?
Stop framing it as a caution and start framing it as a business case. Quantify the gap: what specific deployment variables are unvalidated, what happens if those assumptions are wrong, and what is the minimum intervention that closes the gap - for example, two alpha sites in 60 days for a defined budget. That version is fundable. A general concern about testing is not. Arrive at the stage gate with numbers attached to your ask, not just a flag.
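A back-of-envelope version of that business case, sketched below. Only the $15,000 alpha cost comes from the episode's example; every other figure is a placeholder to replace with your own estimates.

```python
# Back-of-envelope stage gate business case. All figures except the
# alpha cost are placeholder estimates.
p_assumption_wrong = 0.4         # chance the deployment assumption fails
remediation_cost = 250_000       # emergency validation + unplanned field time ($)
delayed_revenue_cost = 150_000   # margin lost to 2-3 quarters of slipped revenue ($)
alpha_cost = 15_000              # two alpha sites in 60 days ($, from the episode)

expected_cost_of_skipping = p_assumption_wrong * (remediation_cost + delayed_revenue_cost)
print(f"Expected cost of skipping alpha testing: ${expected_cost_of_skipping:,.0f}")
print(f"Cost of the alpha programme:             ${alpha_cost:,.0f}")
# If the first number dwarfs the second, the ask is fundable.
```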
What if the classification decision has already locked in and I couldn't stop it - what now?
Monitor aggressively post-launch. Track field time spent on customer training, sales time spent on objection handling, and any emergency validation costs. Quantify those against the original timeline saving. Then bring that evidence to leadership not as a complaint but as process improvement data - with a specific ask to modify the stage gate before the next product goes through it. That's the version of this conversation that gets heard.
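One hypothetical way to keep that post-launch ledger honest - all figures below are placeholders to replace with tracked actuals:

```python
# Hypothetical post-launch ledger for the misclassification debrief.
field_training_hours = 320           # unplanned field time training customers
sales_persuasion_hours = 180         # extra sales time on objection handling
emergency_validation_cost = 120_000  # unbudgeted validation work ($)
blended_hourly_rate = 150            # fully loaded cost per hour ($)
timeline_saving_claimed = 90_000     # what the line extension label was meant to save ($)

actual_cost = ((field_training_hours + sales_persuasion_hours) * blended_hourly_rate
               + emergency_validation_cost)
net = actual_cost - timeline_saving_claimed
print(f"Cost of the misclassification:    ${actual_cost:,.0f}")
print(f"Timeline saving it bought:        ${timeline_saving_claimed:,.0f}")
print(f"Net cost to bring to the debrief: ${net:,.0f}")
```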