S2 Ep10: Are You Invisible to AI? GEO, Soft Launches, and the Signals That Actually Matter
By Matt Wilkinson
Life science marketers have a structural AI citation advantage they're not using - here's how to capture it.
Shownotes
Your company's scientific content is exactly what AI models want to cite - and you're hiding it behind a gate. In this episode, Matt and Jasmine unpack two uncomfortable commercial realities: why life science brands are invisible in AI search despite decades of peer-reviewed credibility, and why most soft launches are risk management exercises dressed up as market learning.
This episode is for marketers, product managers, and commercial leaders in life science, diagnostics, and lab tools who need to know how AI discoverability is reshaping the citation landscape - and why the way you design a soft launch determines whether your sales team inherits signal or ambiguity.
Key idea: Life science companies have a structural AI citation advantage that they are not capturing - because the content is gated, unattributed, or published in formats AI cannot parse.
What you will learn:
- Why NIH and ScienceDirect dominate AI citations in health and science domains - and what that means for your content strategy
- The three structural problems compounding AI invisibility: gated content, ghost writing, and channel mismatch
- Why the quarterly discoverability test is deceptively simple - and why that is a feature, not a flaw
- How field application scientists can close the quality gap in your GEO execution
- Why the soft launch is not a risk management tool - and what happens when you treat it as one
- How to define exit criteria before the soft launch begins and why that single decision determines everything
Chapters:
[00:42] Introduction
[00:49] Jasmine sets up the GEO article - AI citation compression in life science
[04:03] Who owns the AI visibility problem?
[05:50] Why it is not just a marketing problem - product development and GEO
[07:37] How marketing can elevate scientific content for bots and humans
[09:00] Is the gated content trade-off as clear as it seems?
[10:50] Execution concerns: who writes the structured summaries?
[12:22] The field application scientist as a GEO teammate
[13:38] Does ghost-written content hurt discoverability?
[14:34] The politics of named authors in scientific organisations
[16:51] Is a quarterly discoverability test enough?
[18:04] How to tell which assets are driving your AI visibility
[19:13] Using AI to interrogate its own pattern recognition
[20:55] Matt introduces Jasmine's soft launch article
[22:56] What a soft launch should actually be measuring
[24:05] The political problem - product managers without organisational authority
[25:30] Early adopters versus early customers - Rogers' hard line
[27:21] Exit criteria and the single named decision maker
[29:04] When to skip the soft launch entirely
[30:24] Matt's synthesis: a soft launch is a named question, not a harbour
Keywords: life science marketing, AI discoverability, GEO, generative engine optimisation, AI citation, soft launch, product launch strategy, life science commercialisation, scientific content marketing, AI search, citation compression, Rogers diffusion of innovations
Transcript
This episode covers two connected topics: the structural AI citation advantage life science companies are leaving uncaptured, and why the soft launch is failing the commercial teams it was designed to protect. Matt and Jasmine debate both with evidence, pushback, and practical moves you can take this quarter.
Part 1: AI Discoverability and Generative Engine Optimisation
Speaker: Jasmine [00:42]
Hey, Matt.
Speaker: Matt Wilkinson [00:44]
Hey, Jasmine.
Speaker: Jasmine [00:46]
How are things going?
Speaker: Matt Wilkinson [00:47]
Good, thank you. How you doing?
Speaker: Jasmine [00:49]
Yeah, busy as you are - lots of good persona work and other projects on tap. But why don't we get started with the interesting article that you've written, where the central argument is one most life science marketing teams will be pleased to hear.
The content your scientists have been producing for years - peer-reviewed publications, application notes with named authors and specific experimental results, white papers with real methodology - it's exactly what AI models prefer to cite in health and science domains. The big problem is most companies are either hiding it behind a gate - and this is where we should be finger wagging at them - publishing it without attribution, or simply not connecting those assets to their AI discoverability strategy.
The citation landscape is structurally different from consumer search. An analysis of 36 million AI overviews shows NIH content accounting for approximately 39 per cent of citations in health and science domains, ScienceDirect for approximately 11.5 per cent, and established clinical organisations for the balance. Social platforms barely register. Reddit threads and LinkedIn posts - the channels a lot of life science brand managers are optimising for right now - are essentially invisible in AI citation pools for scientific queries.
Citation compression is the mechanism that makes this urgent. Research consistently shows that only five brands appear in approximately 80 per cent of AI responses per B2B category. The dynamic is more binary than traditional search. A search engine had 10 positions on the first page. AI has a recommendation list. If you're not on it, you're not in consideration.
Your post identifies three structural problems compounding the visibility gap: gated content that AI can't see or cite; ghost-written or unattributed content that AI can't anchor to a verifiable expert; and a fundamental mismatch between where marketing teams are investing attention and where AI citation weight actually accumulates.
The resolution is not a content rebuild. It is three moves: publish structured summaries ungated for the top five gated assets, build entity profiles for the three most credible named scientists, and run a quarterly discoverability test across the major AI platforms.
Who actually owns the AI visibility problem inside a life sciences company?
Speaker: Matt Wilkinson [04:03]
Historically this kind of discoverability has been a marketing challenge, and so marketing is likely to own it - and that's who probably should own it. It's not necessarily about marketing creating all of the content. It's really a challenge about how we make sure it's discoverable. We need to feed the bots with the sort of information they want. And when it comes to deeply scientific or health information, what they really want is peer-reviewed journals.
It's no surprise they're already connected to PubMed. If you're looking to do deep research, you can connect directly from an AI search engine - ChatGPT or Claude - into PubMed and conduct those deep searches using MCP connectors. It's really important that we keep doing what many life sciences companies have done for a while - creating reference lists, the publication pages listing all the papers that have been cited - and then go a step further. What's in those publication lists? What's been cited? We need to build a web of rich information about what's available and make sure the AIs can be fed as much of it as possible.
The urgency is real. Much of what surfaces in AI search is drawn from what's in the training data already. So making sure we're getting our information into the next round of training data is critical.
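To make the "feed the bots" point concrete, here is a minimal sketch of what assembling a machine-readable publication reference list could look like, using NCBI's public E-utilities endpoints for PubMed. The search term, field selection, and plain-print output are illustrative assumptions rather than anything prescribed in the episode; a production version would add an API key, rate limiting, and error handling.

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_references(search_term: str, max_results: int = 20) -> list[dict]:
    """Pull basic publication metadata from PubMed for a reference page.

    Uses NCBI's public E-utilities (esearch + esummary). The search term -
    for example a product name or assay keyword - is a placeholder.
    """
    # Step 1: find PubMed IDs matching the query
    ids = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": search_term,
                "retmax": max_results, "retmode": "json"},
        timeout=30,
    ).json()["esearchresult"]["idlist"]
    if not ids:
        return []

    # Step 2: fetch title, journal, authors, and date for each ID
    summaries = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "pubmed", "id": ",".join(ids), "retmode": "json"},
        timeout=30,
    ).json()["result"]

    return [
        {
            "pmid": pmid,
            "title": summaries[pmid].get("title", ""),
            "journal": summaries[pmid].get("fulljournalname", ""),
            "authors": [a["name"] for a in summaries[pmid].get("authors", [])],
            "published": summaries[pmid].get("pubdate", ""),
        }
        for pmid in ids
    ]

if __name__ == "__main__":
    # Hypothetical query - replace with the terms your customers actually publish on
    for ref in pubmed_references("digital PCR rare mutation detection"):
        print(f'{ref["published"]}: {ref["title"]} ({ref["journal"]})')
```

The output is the raw material for the web of rich information Matt describes: a publication list you can render as ungated HTML and mark up so both humans and crawlers can traverse it.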
Speaker: Jasmine [05:50]
I want to expand on that perspective. I think it's the wider marketing function that owns this, including product managers and product marketers. But where I want to push is on the framing that it's primarily a marketing problem with a marketing solution.
The content assets that drive AI citation weight in health and science domains - peer-reviewed publications, application notes with specific experimental results, white papers - are not created by marketing. They're created during product development, through relationships with key opinion leaders. And if those assets aren't being structured for AI citability from the moment of development, a structured summary is just going to be a patch on a problem that compounds over time.
Citation compression means only five brands appear in approximately 80 per cent of AI responses per category. A structured summary of a three-year-old gated application note is better than nothing. It's not a competitive response to a company that has been publishing ungated, attributed, schema-marked content from day one of each product launch. Marketing and product managers certainly own the intersection of product development and commercial output. But if GEO is treated as a marketing retrofit, the marketer's invisibility in the process is going to be a persistent problem.
Speaker: Matt Wilkinson [07:37]
I think that's right. You've got to have an owner internally, and that does naturally sit with marketing because we've got to look at this as a big picture. If I'm looking to solve the scientific domain problem, that's where today's conversation really comes in. But there will be other things around brand where marketing is going to be really important - the website and the journey we need to architect for both humans and for the bots, to make sure we're creating a set of data that the bots can see everything we do and make sense of quickly and easily.
Marketing has to own that piece. But you're very unlikely to be going into the laboratory, running experiments, and writing scientific papers and white papers. Marketing might have input into saying we really need white papers on these topics. But those outputs are owned by the research scientists - and that shouldn't change.
What's really important is that we're aware this is a valuable channel for being found. And those are the things that are now being found, perhaps more so than in the past. One of the things we can really do well as marketers is to see these scientific papers and ask: what can we do with this to elevate the content on the website, both for humans and for the bot, to make it as easily discoverable and accessible as possible.
Part 2: Gated Content, Attribution, and the Discoverability Test
Speaker: Jasmine [09:00]
That brings me to the next question. Is the gated content trade-off actually as clear as your post suggests?
Speaker: Matt Wilkinson [09:09]
I think it is. We know that over time people have become less and less inclined to give up their email addresses to access PDFs of things we want them to have. The lead magnet, as marketers call it, has become less and less effective.
What we need to be doing is looking at how we create assets that are machine readable, human readable, and shareable. Some documents are still going to be served as PDFs - there's no question about that. But that doesn't mean we shouldn't also be publishing them and making sure they're available on the web. And once we make them available on the web in HTML format, we also need to make sure they're easily machine readable - having the right schema markup so it's really easy for both humans and the AI bots to access the data via the web, and also to be able to download and share it as a PDF. With the increased desire for people to create their own buyer's guides, having that information available and machine readable on the web is increasingly important.
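As a sketch of the schema markup Matt is describing, the snippet below wraps an ungated structured summary in schema.org ScholarlyArticle JSON-LD. The schema type, property choices, names, and URLs are illustrative placeholders - the right vocabulary depends on the asset - but the pattern of publishing the HTML summary with an embedded JSON-LD block alongside the downloadable PDF is the point.

```python
import json

def summary_jsonld(title: str, abstract: str, authors: list[dict],
                   date_published: str, page_url: str, pdf_url: str) -> str:
    """Build a schema.org ScholarlyArticle JSON-LD block for an ungated
    structured summary page. Field choices are illustrative; swap in
    TechArticle, Dataset, or another type to suit the asset."""
    data = {
        "@context": "https://schema.org",
        "@type": "ScholarlyArticle",
        "headline": title,
        "abstract": abstract,
        "author": [
            {"@type": "Person",
             "name": a["name"],
             "affiliation": a.get("affiliation", "")}
            for a in authors
        ],
        "datePublished": date_published,
        "isAccessibleForFree": True,  # the summary itself is ungated
        "url": page_url,
        "encoding": {                 # the downloadable, shareable PDF version
            "@type": "MediaObject",
            "contentUrl": pdf_url,
            "encodingFormat": "application/pdf",
        },
    }
    # This <script> block sits in the <head> of the summary's HTML page
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + "\n</script>")

# Hypothetical example values
print(summary_jsonld(
    title="Application note: multiplexed detection of rare variants",
    abstract="Plain-language summary of the methodology and key findings.",
    authors=[{"name": "Dr. Example Scientist", "affiliation": "Example Bio Ltd"}],
    date_published="2024-06-01",
    page_url="https://www.example.com/resources/app-note-summary",
    pdf_url="https://www.example.com/assets/app-note.pdf",
))
```

One page then serves three audiences at once: a reader skimming the summary, a colleague downloading the PDF, and a crawler parsing the structured metadata.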
Speaker: Jasmine [10:50]
I agree all of that is important. But where I have concern is around the execution. Auditing gated content and writing structured summaries for the top five assets in one month - I think that's optimistic for most marketing organisations, especially where marketers don't have a scientific background. What would suffer in that case is quality.
Speaker: Matt Wilkinson [11:27]
I'd hate to think it would take that long. I realise how inefficient many organisations can be. But in this day and age, with the AI tools we have available to us, it really shouldn't take that long to get to a really good first draft. Once we've gone through a couple of loops and established the process, the AI should be able to take that data and turn it into a first draft in the format we want to publish on the website - both HTML and schema markup - to make it easy for the AIs to find.
I hear you, and I think many organisations will struggle. But those same organisations that struggle to deliver something like that will likely continue to struggle in many other facets. And that gap will be exposed as other organisations start to move quicker using AI.
Speaker: Jasmine [12:22]
There's also a human factor here. In addition to leveraging AI as a teammate, I think there's another teammate that could be super helpful - the field application scientist. They would have more scientific background, and they may well already be working with the key opinion leader and the other scientists whose work the summary is based on. They would be a partner in helping make sure the content meets the quality bar you need.
Speaker: Matt Wilkinson [13:01]
Absolutely. There is no point putting up something that's wrong and having the AI present it as fact when it's actually an error on the site. Field application scientists form a vital part of that connection - and are often one of the most well-loved parts of an organisation. The FAS builds really great relationships with customers, to the point where they are the most valuable commercial connection. It's not sales, it's not customer service - it is the person who really speaks to customers in their own language. That's absolutely a valid point.
Speaker: Jasmine [13:38]
So does ghost-written and unattributed content actually hurt a company and hurt the brand? Or could that be a little overstated?
Speaker: Matt Wilkinson [13:50]
When it comes to specific scientific credibility, having named authors does appear to make a big difference. We really want to be able to build our scientists as personal brands within the organisation. Their scientific personal brands add a lot of credibility.
Does it matter that it's ghost-written? Not really - more important is that we need to attribute it. We know that having names and brands in close proximity to topics and content is really important. Ghost writing itself is less the issue. Attribution is the issue.
Speaker: Jasmine [14:34]
Part of the angle here is that it can be a politically loaded conversation inside a lot of institutions as well as biotech companies, where scientists can be fearful of being considered a shill for the company. A senior scientist can be cautious about associating their professional reputation with what they perceive as marketing content. They have a publication record and peer relationships that took years to build. Being named as the author of an application note that a marketing team had significant input on creates a potential credibility question they may not want to answer.
Speaker: Matt Wilkinson [15:28]
That's valid. I think we all have to be really focused on our personal brands in the age of AI - it's one of the things we have to protect above all else. I would never advise people to put their names on things they weren't proud of, things that wouldn't hold up to scrutiny. And we shouldn't be putting out materials we don't feel confident in.
But if it is a white paper, if it really is an application note that's truly interesting, then a scientist will need to have been involved in it. This isn't about shilling. These are supposed to be helpful scientific pieces of content. There is, in my mind, a world of difference between those kinds of pieces and something that is a slick marketing piece. As long as we're framing those things correctly and looking through the right lens, we need to make sure we're shining a light on the great science going on in organisations.
Speaker: Jasmine [16:35]
Staying as true as possible to that scientific content will not only make the author feel more comfortable, but I think it will resonate more strongly with the prospects you're trying to reach with that content.
Speaker: Matt Wilkinson [16:50]
Absolutely.
Speaker: Jasmine [16:51]
So is the quarterly discoverability test actually enough? Or is this a measurement problem in disguise?
Speaker: Matt Wilkinson [17:00]
I think it's probably a bit of both. The thing with a test like this is it's deceptively simple. And I want to defend that against the charge of not being rigorous enough - because things take time to work through the system. If we can put something in place that allows us to have a check-in on a quarterly basis, it becomes part of a reporting cadence. It becomes part of the marketing operating system.
That's really important as we go forward - not just thinking, "we've done the GEO project now, we're done." We know that AI search has a recency bias, more so than in traditional search engines. We need to be playing that game consistently and keeping it top of mind. If we drop off because a competitor has all of a sudden come in and started making a lot of noise about a topic, that's where we've got to be careful.
Speaker: Jasmine [18:04]
I want to pull on that thread about a competitor having more recent information. How do you actually tell which of your assets are driving the discoverability?
Speaker: Matt Wilkinson [18:21]
Sometimes the AIs will be able to tell you when you're running these tests. You'll start seeing the links and the references. You can specifically ask for references - Perplexity has that built in. Other engines do it as well. I think it's really important to look at what's driving that traffic.
When you're doing those searches you'll start to see the picture. And you can then go dig in and ask the AI itself: why were you showing me this? Why has this changed? It may or may not give you the right answer. But it'll give you a sense of what's happened over time and why it's telling you what it's telling you now. That is far easier than trying to look into the black box of Google search - where we don't know what keywords we're being found for, we just know that we were being found.
Speaker: Jasmine [19:13]
It's the power of AI's pattern recognition. Ask it, play devil's advocate with it - what if I did this instead of that? Would my discoverability improve?
Speaker: Matt Wilkinson [19:26]
Yeah, and that's critical.
Speaker: Jasmine [19:30]
So I think at the end of the day, what we can agree on is that life science companies have a structural AI citation advantage that few other industries share. Decades of peer-reviewed output, named scientists with verifiable credentials, experimental data with specific quantitative results. The platforms AI models prefer in health and science domains are the platforms scientists have been contributing to for years.
That advantage is real - and it's not being captured, not because the content doesn't exist, but because it's gated, unattributed, or published in formats that AI models can't parse or verify. The window to act on this is open. But citation compression means it's narrowing. Five brands per category is not a future state. It's the current operating condition.
If you're a marketer or a marketing leader listening to this and you've not run the discoverability test, that's the first move we both encourage you to take. Open an AI platform, ask what you would ask if you were a researcher evaluating suppliers in your category, and find out whether you're in the conversation before you assume you are.
Part 3: Soft Launches - Signal Collection or Risk Management?
Speaker: Matt Wilkinson [20:55]
That brings us nicely onto your post, Jasmine, where you've been talking about soft launches and what they're actually measuring. Your central argument - which is probably uncomfortable for new product development teams - is that the soft launch is not a risk management instrument. It is a signal collection instrument. And most life science companies are running it as the first thing while calling it the second.
Your argument is that the failure starts before the soft launch ever begins. If you ask commercial, R and D, and regulatory what the soft launch is for, you get three different answers. When it means everything to everyone, it becomes a container for unresolved cross-functional tension. Exit criteria get renegotiated mid-flight. Timelines drift. And the product manager absorbs the delay.
The sharper failure is cohort design. Soft launch cohorts should be paying customers at or near list price, recruited specifically for their ability to evaluate on scientific merit and influence downstream purchasing decisions. Rogers' diffusion of innovations theory draws a hard line between early adopters and early customers. Early adopters buy without peer precedent and have the organisational credibility to unlock the next wave of commercial adoption. Early customers buy because the product is on a preferred vendor list. These two groups will give you completely different signals. And if you've discounted your way into your cohort, you will never know which one you're actually listening to.
That brings up a really uncomfortable subtext. In many life science organisations, the soft launch is not primarily a market learning exercise. It is a risk distribution mechanism. Nobody wants to be the person who greenlit the product that failed publicly. The soft launch extends the period during which accountability is shared - and therefore the PM holds the timeline while every functional stakeholder waits for someone else to call it.
This was really interesting to me. What is the soft launch actually measuring?
Speaker: Jasmine [22:56]
It should be measuring commercial readiness. And the three questions it needs to answer are: can your FAS team operate without help from product management or R and D; do customers reorder at list price without a relationship discount; and does your positioning language match the words customers use unprompted when they describe the problem to a colleague.
If you're not measuring those three things, you're measuring sentiment. And sentiment doesn't tell you whether your commercial motion can scale. Social approval is a relationship asset - it's a good one, it's an important one - but it's not a launch signal. A product manager or marketer who can't tell the difference is going to walk into full commercial release with a pipeline built on goodwill and a pricing precedent that the sales team will spend a year walking back.
Speaker: Matt Wilkinson [24:05]
I agree with the framework. I guess the challenge I have is that in a number of the organisations I've worked with, the product managers don't have the organisational authority to run this. How do we help them overcome that political problem? Because it feels like there's a bit of dysfunction there.
Speaker: Jasmine [24:25]
I think it's about socialising the real impact of a soft launch with the commercial team. Because frankly, what sales leader wouldn't buy in to testing the pricing early? What FAS leader wouldn't buy in to testing whether their team is actually ready - at a smaller scale rather than spending a lot of time and hard-earned reputation to fail in front of a customer? It's all about socialising upwards in the organisation during the product development process so it doesn't come as a surprise when you want to run a soft launch.
Speaker: Matt Wilkinson [25:18]
That's really interesting. One of the things you said made me realise that we have to be framing the cohort from which we're collecting this data really carefully. Who belongs in that cohort? And what happens when you get it wrong?
Speaker: Jasmine [25:30]
This is where Rogers draws a hard line between early adopters and early customers. And that line matters more in soft launch design than most product managers realise. Early adopters evaluate on scientific merit. They buy without requiring peer precedent and have the organisational credibility to influence the early majority - who won't move without it.
There's a very specific persona for that early adopter versus the early customer who buys because the product is on a preferred vendor list, procurement approved it, and the price was right. If your soft launch cohort is full of early customers, your signal is skewed - and it's almost a failed experiment from the start.
However, if your cohort includes anyone you discounted to get through the door, that's even worse. You'll never know whether the product sold or the discount did, and your sales team inherits that ambiguity as a pricing precedent. The behavioural test is simple. Did your early adopter reorder at list or near list price? That's the signal. Did they tell you the product is great on a call and then go quiet? That's just a fan. And fans don't move markets.
Speaker: Matt Wilkinson [27:04]
It really looks to me then that we've got a data analysis issue, but also a segmentation challenge. If we go to the next question - the political one - who actually owns the exit decision? Because that feels like the critical piece. How do we know we've got everything we need out of the soft launch?
Speaker: Jasmine [27:21]
Here's the version of this conversation that most product managers skip. In a lot of life science organisations, the soft launch is not a market learning exercise. R and D points to open field data. Regulatory points to unfinished qualification runs. Commercial points to insufficient reference site density. And the product manager holds the timeline and absorbs every month of delay.
The only protection is exit criteria defined in writing before the soft launch begins, owned by a named decision maker - not a steering committee - a single person. And anything that surfaces during the soft launch that was not in the original gate definition goes onto a separate list. The moment you allow scope creep to expand your pre-launch definition of done, no product leaves the harbour on a defined schedule.
A nine-month soft launch with shifting goalposts is not a soft launch. It is organisational indecision under better vocabulary. And it's expensive in ways most stakeholders never have to account for directly - because the product manager is the one holding the tab.
Speaker: Matt Wilkinson [28:43]
What I think you're saying is that this diffusion of risk and accountability - decision by committee - is almost the biggest challenge to a successful soft launch, as well as having fuzzy definitions, which we know are the killer of nearly every commercial process out there. So when is skipping the soft launch the right call?
Speaker: Jasmine [29:04]
That's a really important question. Treating a soft launch as a default process, regardless of context, is just as risky as skipping it. The test is simple: what specific question will this soft launch answer that you can't answer in any other way? And what decision will that answer enable? If you can't complete that sentence with something concrete, you may not need a soft launch at all.
Speaker: Matt Wilkinson [29:35]
Are there any particular situations where you would definitely skip it?
Speaker: Jasmine [29:39]
Where I might skip it: proven consumables and line extensions with identical workflows. Adding a new lot size to an established kit could add 60 to 90 days of delay for data you already have. That's an area where a soft launch doesn't answer any valuable question.
Another: where markets are too small for staged access to matter. If your total addressable market is 30 core facilities globally, a cohort of six sites is your launch. A soft launch is simply not relevant in that case.
Speaker: Matt Wilkinson [30:24]
It sounds like the soft launch has been doing double duty for years - serving as both a market learning exercise and an organisational risk distribution mechanism. And the product manager who treats it only as the first will get overrun by the political reality of the second. The product manager who treats it only as the second will generate nine months of expensive ambiguity and hand their sales team a pricing precedent rather than a signal.
From our conversation today, I think I've learned that the soft launch done well is a named question, with a named decision maker, a recruited cohort of genuine early adopters, and a hard set of exit criteria - including a date that was on the calendar well before the first site ever signed on. If you're a product manager listening to this and you cannot point to all of these elements in your current soft launch plan - are you really running a soft launch? Or are you running a harbour with no departure schedule?
Speaker: Jasmine [31:27]
Agreed. It's a good challenge for all product managers and commercial teams to think about.
Speaker: Matt Wilkinson [31:35]
Absolutely. Well, thank you Jasmine. I've learned a lot from talking to you about this today. Really appreciate your time.
Speaker: Jasmine [31:41]
As have I. Thank you, Matt, and thank you to all of you who have been listening to A Splice of Life Science Marketing. We look forward to seeing you again soon.
Speaker: Matt Wilkinson [31:52]
See you next week.
Q&A
How do I know if my company is showing up in AI search results right now?
Open an AI platform - ChatGPT, Perplexity, or Claude - and ask the questions a researcher evaluating your category would ask. Do not search your brand name directly. Perplexity surfaces citations by default. If your assets are not appearing, you have your answer. Run this test quarterly and log the results. It takes less than an hour and the output is a real gap analysis, not a sentiment report.
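A minimal sketch of what logging that quarterly test could look like, assuming one provider's API - here the OpenAI Python SDK, though the same loop works against Anthropic's or Perplexity's APIs. The questions, brand terms, and model name are placeholders, and the substring match is a deliberately crude proxy for "are we in the conversation".

```python
import csv
import datetime

from openai import OpenAI  # pip install openai; swap in any provider's client

# Placeholder queries - phrase them the way a researcher in your category would
QUESTIONS = [
    "Which suppliers should I consider for multiplex qPCR assay kits?",
    "What are the most trusted vendors for single-cell sample prep?",
]
BRAND_TERMS = ["Example Bio", "ExampleBio"]  # hypothetical brand spellings

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def run_discoverability_test(log_path: str = "discoverability_log.csv") -> None:
    """Ask category-level buying questions and log whether the brand is mentioned."""
    today = datetime.date.today().isoformat()
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for question in QUESTIONS:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": question}],
            )
            answer = response.choices[0].message.content or ""
            mentioned = any(term.lower() in answer.lower() for term in BRAND_TERMS)
            writer.writerow([today, question, mentioned, answer[:500]])

if __name__ == "__main__":
    run_discoverability_test()
```

Run it once a quarter, keep the CSV, and the trend line over a year tells you whether the structured summaries and entity profiles are actually moving your citation footprint.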
Our best content is gated. Where do we start without a full content rebuild?
Pick your top five performing gated assets - typically application notes or white papers. Write a structured summary of each: a plain-language abstract, key experimental findings, named authors, and schema markup. Publish these ungated on your website in HTML format. This does not replace the gate. It feeds the bots with enough attributed, verifiable content to begin establishing your citation footprint. Start with one asset this week.
Our scientists are reluctant to put their names on marketing content. How do we handle that?
Reframe the conversation. The content that builds AI citation weight - application notes, white papers, experimental results - is not marketing content. It is scientific content with commercial relevance. If it is not rigorous enough for a scientist to attach their name to it, it is not rigorous enough to drive credibility in AI search either. Involve your field application scientists as quality reviewers. Their sign-off raises the bar and removes the perception that marketing wrote it.
How do I write soft launch exit criteria that my stakeholders will actually agree to?
Define exit criteria in writing before the soft launch begins and name a single decision maker - not a committee. The three commercial readiness tests are: can your FAS team operate without R and D support; do customers reorder at or near list price; and does your positioning language match the words customers use unprompted. If you cannot get sign-off on these criteria before launch, the soft launch is already a political exercise, not a learning one.
How do I know if my soft launch cohort is giving me useful signal?
Apply Rogers' test. Early adopters evaluate on scientific merit, buy without peer precedent, and have the credibility to influence the next wave of adoption. Early customers buy because the product is on a preferred vendor list or procurement approved it. If your cohort is full of early customers - or anyone you discounted to get through the door - your signal is skewed from day one. The simplest behavioural test: did they reorder at list price? If not, you have not validated the product. You have validated the discount.