S2 Ep7: The Human Edge: Trade Show Trust and Why Choose You?
By Matt Wilkinson
AI owns the information war. Your booth and your team's judgment are all that's left — and both need deliberate protection. For life science marketers.
Shownotes
Matt and Jasmine debate whether generative AI search represents a branding opportunity or a performance marketing channel for life science companies. The conversation unpacks the mechanics of AI discoverability, examining conversion data (23x better than organic search), volume reality (still <1% of total traffic), and the strategic implications for B2B life science marketers.
Key tensions explored: incumbents vs newcomers, content quality vs volume, brand visibility vs measurable outcomes, and the 3-6 month timeline for GEO results. The discussion reveals how query fan-out, training data dynamics, and platform-specific biases (Grok over-indexes X, ChatGPT favours Reddit) create a probabilistic rather than deterministic visibility model.
Core argument: AI search isn't an extension of SEO - it's a different game rewarding structured, answer-ready content over traditional optimisation tactics. Small players can compete through specificity and recency, but incumbents hold trust signal density from years of indexed content.
Actionable close: Test your visibility quarterly across ChatGPT and Perplexity with category questions and specific use cases. The gap between results maps your competitive position in AI discovery.
Transcript
Jasmine (00:02): Hello, now.
Matt (00:03): Hey Jasmine, how you doing?
Jasmine (00:05): I'm doing really well. What about yourself?
Matt (00:07): I'm good, thank you. Looking forward to this one today.
Jasmine (00:09): Yeah, me too. Getting ready for Boston and the cold weather. So today, we're gonna talk about a story that resonated with both of us. Right now, while you're reading this, an AI is deciding whether your brand exists, not in a philosophical sense, in a practical commercial one. A life science researcher is asking ChatGPT or their favourite LLM for recommendations on flow cytometry systems. Another is using Perplexity to compare qPCR platforms. Your product might be brilliant, your content might be comprehensive, but if AI can't find you, cite you, and recommend you, you're invisible.
Jasmine (00:34): For the first time since Google's early days, we're watching the rules of discoverability get rewritten in real time. And most life science marketers haven't noticed yet. The article that you put together maps what we know about the current discoverability landscape in detail. It covers the mechanics of query fan-out, which is how a single user question spawns dozens of synthetic queries behind the scenes; the economics of AI crawling versus referral, where training bots take vastly more than they give back; and the tactical shifts required to remain visible. The core argument is that AI search is not an extension of traditional SEO, as one of our friends, Sarah Stahl, says, it's a different game. It operates on fundamentally different logic, rewards different content structures, and creates a probabilistic rather than a deterministic visibility model. The piece draws on a range of sources, including a discussion between Rand Fishkin, the founder of SparkToro, and Mike King of iPullRank. So Matt, why should life science companies care? Yeah, please go for it.
Matt (02:32): Well, the commercial stakes are real, but they're really unevenly distributed. AI search traffic converts at extraordinary rates, about 23 times better than organic search, according to Ahrefs data. But people landing on your website from AI search currently account for less than 1% of total website traffic. So the gap between the quality signal and the volume reality is the central tension for any marketing team deciding where to invest right now. At the moment, Google is still driving the vast majority of traffic to websites, but that doesn't mean you can ignore what's going on in AI, because very often people aren't actually going to websites until much, much later in their buying journey anyway. People are staying within the chatbot interface for longer. They're creating their own buying guides and fundamentally changing the truths that digital marketers have clung to over the last decade or so.
Jasmine (03:41): Yeah, I found this really interesting, especially from a performance marketing perspective. The B2B life science context completely changes the maths. When a single deal is worth six or seven figures, you don't need volume. You need qualified buyers, and that's exactly what AI search delivers, because those buyers have done their homework through their favourite LLM. They've done the compare and contrast. They may use different modes like deep research to really go into the details, and they ask their LLM, if it doesn't offer one on its own, what it would recommend as a shortlist of what to buy.
Matt (04:40): Yeah, it's an interesting point. Stats from Forrester say that 90% of all B2B purchases currently use some form of generative AI to either do deep research or to compare and contrast information along that buying journey. So it's a really important place to be visible. But I would argue that it's not a performance channel, it's more of a branding one. And that's sort of what Mike King says in his interview with Rand Fishkin. He was the 2025 Search Marketer of the Year, and he's done a lot of client work and a lot of research. I would contend that these platforms are more branding channels than they are performance; your return on investment is going to be reasonably low, at least right now. But I wouldn't dismiss AI search as unimportant. In fact, I think it's vitally important that our brands are visible in AI search. I just think that it doesn't fit within the performance marketing bracket.
Jasmine (05:51): You know, it's interesting, because I've heard Sandy Carter, who recently took over the Marketing Companion podcast from Mark Schaefer, say that there is evidence that traffic from GPT converts at a much higher rate than traditional search. And even King's own client data contradicts the branding-only thesis. A financial services case study in his article shows a 121% increase in signups and an almost 53% increase in organic search traffic from AI search optimisation work. Those are performance metrics, real signups, real traffic. The iPullRank team isn't running a PR campaign there. They're engineering visibility in a retrieval system, and the business outcomes are measurable.
Matt (07:00): Look, those stats are very impressive. The challenge is that the volume reality hasn't changed yet. Google still drives 345 times more traffic than all AI platforms combined. The 23 times conversion stat is website data, and it makes a lot of sense; it goes to show how important it is to be visible. But because we're not able to go out and look at keywords, this works in a very different way. Really what you're trying to do is make sure that your brand turns up in the same conversations as the questions your customers are going to be asking. So for me, it still sits more in the brand visibility bucket. But then I might be the sort of person that would put a trade show in both buckets. Branding is incredibly important at a trade show, but it's also a performance channel. So maybe there's a case that this is a brand performance channel rather than purely one or the other.
Jasmine (08:24): Yeah, it's an interesting way to look at it, because that grey line between the two is very muddled, if you will. If you think about what a lot of these LLMs are doing, they're making recommendations. To me, that's a conversion metric. Once you're in that recommendation bucket, the inflection point from branding signal to performance channel becomes a question of when, not whether, and by the time it's obvious, the early mover advantage is gone.
Matt (09:11): Yeah, and one of the really important things right now, in my mind at least, and this is unproven, is the way the bots work: there's an argument to say that if you're being found in AI search, you're more likely to end up in the training data for the next model. And if those probabilistic connections are being made between your brand and something else right now, the more you turn up in that data, the more likely you are to be the brand of choice for any particular query in the future. It's really about making sure that content is structured well. I've spent quite a bit of time thinking about this, actually playing with the tools and trying to understand it, and a well-structured piece of content is likely to perform better. And then I think smaller players can really hold their own against incumbents, because it is much less about the amount of content and much more about the quality, the timeliness, and whether or not it's answer-ready.
Jasmine (10:41): I really love this argument, because as a marketer I've always looked for opportunities for the little guy to compete against the big company. If we can get teams to understand that AI friendliness rewards structure and specificity, they can go out and find long-tail opportunities to speak to very specific use cases, workflows, or pain points within their product categories. This can be a great way to level the playing field with just a bit of extra time and a bit of thoughtful content. It means that startups, or teams with smaller budgets, can actually compete where they maybe couldn't before in contested sectors. So for me, content quality seems to matter more than budget in AI discovery, and a focused application guide, well orchestrated and well marked up on the website, targeting a specific job to be done and specific questions, can really outperform generic overviews from much larger competitors, often because AI systems can't even make sense of those companies' offerings. I think that's particularly helpful because AI systems reward specificity and structured, answer-ready content over volume, and we know that web projects within large organisations really can't move quickly.
Jasmine (16:35): Fair enough. At the same time, there is a bit of first mover advantage, a bit of an incumbent advantage. AI systems clearly learn from what already exists on the web. Incumbents in a particular category, liquid handlers for example, have years of forum citations, review mentions, technical comparisons, and indexed documentation that have been absorbed by your favourite LLM. This isn't just content volume. This is trust signal density, and AI systems weigh it heavily when assembling recommendations.
Matt (17:27): That is true, but I think it neglects the recency bias. We know that AI search does prefer more recently updated content. And we know that in so many big organisations, pages go live and then they're forgotten about; they exist until the next big web refresh, and maybe aren't even rewritten as part of that, they just get moved over to the next platform. So the barrier to entry for GEO is lower than for almost any other discovery channel. You don't need massive ad spend or a PR team. You just need a lot of well-structured content that answers the exact questions a researcher is prompting, and to make sure you're answering those questions, and all of their permutations, in the right way.
Jasmine (18:16): Without a doubt, answering those questions in the right way, in an AI-friendly way, and getting your persona AIs to weigh in on what those friendly ways would be, is super important. At the same time, GEO, we believe, takes roughly three to six months to show results. So for a niche player entering a crowded category, that runway assumes you have the resources and patience to wait. Most life science teams launching a new tool don't have that luxury.
Matt (19:04): And that's fair. I think it also potentially changes the way we think about product launch. Whilst AIs do love recency of content, we need to make sure we're sending them the right signals and that our content is being spread through the networks they search and want to crawl. But it maybe means we have to start thinking a little bit earlier: how do we make sure that we're not tipping our hand, but we're answering questions about the topics, even if we're not giving information about the product itself? For me it still feels like we're in this wild west of AI search discovery at the moment, where the rules are changing all the time. They're not fixed, and even though I've spent a lot of time trying to understand what's going on in the space, there's still conflicting information out there. But I think that smaller players that earn AI recommendations usually do so because they've got higher quality content, and that leads to higher quality visitors.
Jasmine (20:16): Yeah, I think you've got a point here, and this is something I mentioned in my blog post: AI friendliness and timing are butting up against product confidentiality in the new product development process. I think this behooves heads of product management to think a little outside the box and build new workflows that leak some information earlier on, to work with that three to six month ramp-up time for GEO. So maybe you have your product managers on ResearchGate and Reddit, replying to technical questions in an area they're not really known for yet. It's an easy early teaser for a new product that's about to be launched in a new category. That's just one example of how they may want to get ahead of the curve and play the game a bit.
Matt (21:44): Yeah, that makes sense, although it can be hard for larger organisations to allow people to go onto social media and take those actions, which I think plays into this being a bit of a leveller. But there's a real point worth acknowledging: incumbents do have lots of indexed content, typically have a bigger forum presence, and have teams they can deploy to work on that stuff. It's worth knowing, though, that indexed content and AI-recommended content aren't the same thing, and volume of existing content doesn't necessarily translate into quality of recommendation. Of course, the more you're there, the more likely you are to have answered a question. So I think there's probably a middle ground here, but it really is difficult to know exactly how things are going to change, and we're still having to experiment to understand these probabilistic machines. Even within that statement, we know already that Grok, ChatGPT, Claude, Gemini, and Perplexity all answer questions in different ways. Their search algorithms all work off different probabilistic mechanisms, and they all have access to different data. So for example, Grok is trained on an awful lot of information from X and Grokopedia, while ChatGPT has historically overrepresented Reddit data because of the licensing deal OpenAI did with Reddit to get access to their data. So again, it comes down to a training data and access challenge: what different tools actually have access to, and which bots are being blocked on which platforms.
Jasmine (23:51): So I think this sort of lends itself to product managers, marketing managers, rethinking maybe one of the tasks in their workflow. And let's say once every couple of months, spend an hour prompting ChatGPT and Perplexity just to pick two random LLMs with two queries. The first, the category question your product answers, and then the specific use case it targets. And see who shows up in each of these prompts and each of these queries. And that gap between these two results is your competitive map for AI discovery and it costs nothing except a bit of time, but it can help you stay ahead of how you want to tweak your content on your already launched products and how you might want to expand your content as you're getting ready to launch a new product.
Matt (25:02): And I think that's great advice to finish on. We really have to consider AI search not just as another channel, but treat the bots as new members of the decision-making group, new influencers that we have to be marketing to as well. That's going to be a mindset shift that will take time for a lot of people, especially because those bots are changing all the time.
Jasmine (25:31): The bots are changing all the time, and the bots are becoming the ultimate influencer for your customers, even in the sciences.
Matt (25:42): And that's definitely something we can agree on.
Jasmine (25:45): Absolutely. Thanks, Matt. This, as always, has been a great discussion. Bye for now.
Matt (25:50): That was a lot of fun. Look forward to the next one. Thanks, bye.
Q&A: AI Search Strategy for Life Science Marketing
Q: What is the core difference between traditional SEO and AI search optimisation?
A: AI search operates on fundamentally different logic. Traditional SEO is deterministic - you optimise for keywords and track rankings. AI search is probabilistic. It rewards structured, answer-ready content over keyword density. A single user question spawns dozens of synthetic queries behind the scenes (query fan-out), and visibility depends on whether your content matches those probabilistic connections, not just keyword placement.
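The fan-out idea can be sketched as a toy function. This is a minimal illustration only, not any platform's actual algorithm; the templates and entity names are invented for the example:

```python
# Toy illustration of query fan-out: one user question is expanded into
# many synthetic sub-queries before retrieval. Real AI search systems do
# this with learned models; these templates are invented placeholders.

def fan_out(question: str, entities: list[str]) -> list[str]:
    """Expand one user question into a list of synthetic sub-queries."""
    templates = [
        "best {e} for a core facility",
        "{e} vs alternatives",
        "{e} pricing and specifications",
        "{e} user reviews",
    ]
    queries = [question]  # keep the original question first
    for e in entities:
        queries += [t.format(e=e) for t in templates]
    return queries

subqueries = fan_out(
    "which flow cytometry system should I buy",
    ["flow cytometer", "spectral cytometer"],
)
print(len(subqueries))  # 1 original + 2 entities x 4 templates = 9
```

Your content is competing for retrieval against every one of those synthetic permutations, not just the visible question, which is why covering a topic's variations matters more than ranking for one keyword.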
Q: Is AI search a branding channel or a performance marketing channel?
A: It's both, and the debate matters. AI search traffic converts 23 times better than organic search (Ahrefs data), but represents less than 1% of total website traffic. For B2B life science, where single deals are six or seven figures, you don't need volume - you need qualified buyers. The question isn't whether AI search delivers performance, it's when the inflection point from brand signal to performance channel becomes obvious. By then, early mover advantage is gone.
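The tension between the 23x and <1% figures becomes concrete with a back-of-the-envelope calculation. The total-visit count and baseline conversion rate below are invented for illustration; only the 23x and 1% figures come from the discussion:

```python
# Back-of-the-envelope: what a 23x conversion rate on ~1% of traffic means.
# total_visits and base_conv are illustrative assumptions, not real data.

total_visits = 10_000
ai_share = 0.01            # AI search: roughly 1% of traffic
base_conv = 0.01           # assumed organic conversion rate
ai_conv = base_conv * 23   # 23x better (Ahrefs figure cited above)

ai_visits = total_visits * ai_share
other_visits = total_visits - ai_visits

ai_conversions = ai_visits * ai_conv          # ~23
other_conversions = other_visits * base_conv  # ~99

share = ai_conversions / (ai_conversions + other_conversions)
print(f"AI search: {share:.0%} of conversions from {ai_share:.0%} of traffic")
```

Under these assumptions, the AI channel supplies roughly a fifth of all conversions from a hundredth of the traffic, which is why dismissing it on volume alone misreads the data.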
Q: Do incumbents have an unfair advantage in AI discovery?
A: Yes and no. Incumbents have years of forum citations, review mentions, technical comparisons, and indexed documentation absorbed by LLMs. These are trust signal densities that AI systems weigh heavily. However, indexed content and AI-recommended content aren't the same thing. AI search prefers recency, structure, and specificity over volume. Large organisations often let pages go stale between web refreshes. Smaller players can compete through well-structured application guides targeting specific jobs to be done.
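One concrete form of "structured, answer-ready content" is schema.org markup on the page. A minimal sketch using only the standard library, generating FAQPage JSON-LD; the question and answer text, and the product name, are invented examples:

```python
import json

# Minimal sketch: emit schema.org FAQPage JSON-LD, one common form of
# structured, answer-ready content. The Q&A text is an invented example.

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Which qPCR plate formats does the system support?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "It accepts 96- and 384-well plates with no adapter; "
                        "see the compatibility guide for details.",
            },
        }
    ],
}

jsonld = json.dumps(faq, indent=2)
print(jsonld)  # embed inside a <script type="application/ld+json"> tag
```

Markup like this gives a crawler an unambiguous question-answer pair to retrieve, which is exactly the shape a focused application guide should expose.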
Q: How long does GEO take to show results?
A: Roughly three to six months. This timeline creates tension with product confidentiality in new product development. You need content live before launch to benefit from GEO during launch windows. The workaround: product managers engaging on ResearchGate or Reddit, answering technical questions in adjacent areas before product announcement - seeding the category without tipping your hand.
Q: What's the practical testing framework for AI visibility?
A: Quarterly, spend an hour prompting ChatGPT and Perplexity with two queries: (1) the category question your product answers, and (2) the specific use case it targets. See who shows up in each. The gap between these two results maps your competitive position in AI discovery. It costs nothing except time and reveals where you need to strengthen content.
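The gap analysis itself can be scripted once you have the two answers in hand. A minimal offline sketch, assuming you paste in the response text by hand or collect it via each platform's API; the brand names and answer snippets are placeholders:

```python
# Minimal sketch of the quarterly visibility check: given answer text from
# the two prompts (category question vs specific use case), see which
# brands each mentions. Brands and responses here are placeholders; in
# practice you'd paste in real ChatGPT and Perplexity answers.

def brands_mentioned(answer: str, brands: list[str]) -> set[str]:
    """Case-insensitive check for which brand names appear in an answer."""
    text = answer.lower()
    return {b for b in brands if b.lower() in text}

brands = ["AcmeCyto", "BioRival", "FlowCorp"]

category_answer = "Top options include AcmeCyto and BioRival systems..."
use_case_answer = "For rare-event detection, FlowCorp and BioRival lead..."

in_category = brands_mentioned(category_answer, brands)
in_use_case = brands_mentioned(use_case_answer, brands)

gap = in_category ^ in_use_case  # mentioned in one prompt but not both
print(sorted(gap))  # ['AcmeCyto', 'FlowCorp']
```

The symmetric difference is the competitive map: brands strong on the category question but absent from the use case (or vice versa) show exactly where content needs strengthening.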
Q: Do different AI platforms favour different content?
A: Absolutely. Grok over-indexes content from X and Grokopedia. ChatGPT historically overrepresents Reddit data due to OpenAI's licensing deal. Claude, Gemini, and Perplexity each have different search algorithms, probabilistic mechanisms, and training data access. Your visibility varies platform by platform. Test across multiple systems, not just one.
Q: What does "treating bots as new members of the decision-making group" mean?
A: It means recognising that AI systems aren't just search tools - they're influencers. Ninety per cent of B2B purchases use generative AI for research or comparison (Forrester). Buyers are staying inside chatbot interfaces longer, creating their own buying guides, and arriving at your website much later in the journey. You're no longer just marketing to humans. You're marketing to the probabilistic systems that shape their shortlists before you ever see them.
Q: Does being found in AI search now affect future model training?
A: Unproven but plausible. If you're being found in current AI search, you're more likely to end up in training data for the next model generation. Those probabilistic connections between your brand and category questions compound over time. The more you appear in training data, the stronger those connections become. This is speculative but suggests early investment in AI visibility has compounding returns beyond immediate traffic.