Strivenn Thinking

Separating Signal from Snake Oil in AI Search

Written by Matt Wilkinson | Mar 4, 2026 11:03:20 AM

Your agency just pitched something called Answer Engine Optimisation to you. The deck was confident. The retainer was not cheap. Before you redirect budget, you need to know what Google's own search team actually says about this.

 

In August 2025, John Mueller - Google's senior search analyst - described the proliferation of AEO and similar acronyms as "spam and scamming". A month later, Danny Sullivan, Google's Search Liaison, was more measured but equally clear: AEO and GEO are a subset of SEO, not new disciplines. Separate branding for an existing practice.


This does not mean ignore it. It means calibrate.


The shift in buyer behaviour is real. The question is how real, how fast, and where it actually creates risk for a life science tools company scaling past ten million in revenue.


What the data actually shows

AI-powered tools currently account for roughly 0.77% of US desktop web activity. That sounds small. It is small. But it has tripled in twelve months, and referral traffic from large language models grew 155% over eight months versus 24% for traditional search. The trajectory is what matters, not the current volume.
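To see why trajectory dominates current volume, a back-of-envelope compounding sketch helps. This is purely illustrative: it assumes the tripling observed over the last twelve months simply repeats each year, which no growth curve guarantees.

```python
def project_share(start_share: float, annual_multiple: float, years: int) -> float:
    """Compound a traffic share forward by a fixed annual multiple, capped at 100%."""
    share = start_share
    for _ in range(years):
        share = min(share * annual_multiple, 100.0)
    return share

# 0.77% starting share and the 3x/year multiple come from the figures above;
# the persistence of that multiple is an assumption, not a forecast.
for year in (1, 2, 3):
    print(f"Year {year}: {project_share(0.77, 3.0, year):.2f}%")
```

Even under that aggressive assumption, the share stays in single digits for roughly two more years; under any gentler one, longer. Small base, steep curve.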


Google still processes approximately 5 trillion searches per year and holds around 90% of the search market. It is not going anywhere. But AI Overviews now appear in 18-25% of queries, and when they do, the top organic result loses 34.5% of its clicks. The structural disruption is not hypothetical. It is already visible in publisher analytics across verticals.


So: AI search is growing fast from a small base. Traditional search is vast but showing structural cracks. The prudent position is not to pivot. It is to hedge.


The conversion claims deserve scrutiny

The headline figures circulating in agency decks deserve a closer look. You have probably seen the "23 times higher conversion rate from AI referral traffic" claim attributed to the SEO toolmaker Ahrefs. It is real - but it describes one company's internal data, drawn from an audience of SEO professionals actively seeking the Ahrefs product. It is not a universal benchmark.


Across 54 websites, one analytics study found AI referral traffic converting at 4.87% versus 4.60% for organic - a difference too small to reach statistical significance in that sample. A 973-site e-commerce study found ChatGPT referrals underperforming traditional channels on both conversion rate and revenue per session.
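Why does a 4.87% versus 4.60% gap fail significance? Because significance depends on sample size as much as gap size. A quick two-proportion z-test makes the point; the session counts below are hypothetical, since the study's per-channel sample sizes are not quoted here.

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical volumes: 10,000 AI-referral sessions vs 200,000 organic.
# 4.87% of 10,000 = 487 conversions; 4.60% of 200,000 = 9,200 conversions.
z = two_proportion_z(487, 10_000, 9_200, 200_000)
print(f"z = {z:.2f}")  # z = 1.26, well below the 1.96 threshold for p < 0.05
```

With these (assumed) volumes the gap never clears the conventional significance bar; with ten times the traffic it would. That is exactly why single-study conversion multipliers travel badly.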


The defensible middle position is this: AI referral traffic does tend to convert above average because users arrive pre-qualified. They have already asked questions, assessed options, and formed intent inside the AI conversation before reaching your site. But the magnitude varies from 1.2 to 23 times depending on industry, conversion type, and measurement period. Anyone presenting a single multiplier as the expected outcome is selling you a number they cannot support.


Three patterns we keep seeing

Across Strivenn's exhibitor surveys at ELRIG Drug Discovery 2025 (n=107) and SLAS 2026 (n=43), three patterns have emerged consistently enough to name. They are not hypotheses. They are observations that held across two conferences, two geographies, and three months.

 

 

The Unconsidered Set: The state of being absent from AI-generated consideration before you know you are absent. Sixty-two percent of life science exhibitors at both events had never asked an AI to recommend companies in their category. Buyers who cannot find you in AI results never reach you to complain about it. You do not lose the deal. You are simply never in the running.
The Programme Gap: The consistent pattern where access to AI tools without structured adoption produces near-zero commercial return. Forty-four percent of respondents at both ELRIG and SLAS had AI tools in their organisation without any structured programme to use them. The revolution is happening at the individual level while organisations stay passive. The gap between tool access and operational advantage is not closing.
Citation Compression: The structural force that is permanently narrowing who gets found in AI-mediated discovery. Research consistently shows that only five brands appear in approximately 80% of AI responses per B2B category. A search engine had ten positions on the first page. An AI has a recommendation list. Citation compression describes what happens when that list consolidates - and why the window to act is not hypothetical.
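The discoverability test these patterns imply is cheap to run. The sketch below tallies which brands a sample of AI answers mentions and measures how concentrated the top five are. The answers, brand names, and matching logic are all invented for illustration; real measurement would sample live AI responses and handle name variants.

```python
from collections import Counter

def top5_share(answers: list[str], brands: list[str]) -> float:
    """Fraction of all brand mentions captured by the five most-cited brands."""
    counts = Counter()
    for answer in answers:
        text = answer.lower()
        for brand in brands:
            if brand.lower() in text:
                counts[brand] += 1
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(n for _, n in counts.most_common(5)) / total

# Invented sample: three AI answers to the same category prompt.
answers = [
    "For this category, an assistant might suggest VendorA, VendorB and VendorC.",
    "VendorA and VendorB are common picks; VendorD and VendorE also fit.",
    "Most answers lead with VendorA, then VendorC or VendorF.",
]
brands = ["VendorA", "VendorB", "VendorC", "VendorD", "VendorE", "VendorF"]
print(f"Top-5 share of mentions: {top5_share(answers, brands):.0%}")
```

On real data, a top-5 share approaching the 80% figure cited above is what Citation Compression looks like in your own category - and whether your brand sits inside that five is the Unconsidered Set test.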

 

 

These three patterns are not independent. The Unconsidered Set grows because of the Programme Gap - teams without structured AI adoption are the least likely to run the discoverability test that would reveal their absence. And Citation Compression makes the consequence of that absence compound silently. The longer you are absent from AI consideration, the more established your competitors become as the default references that models return.


What to invest in, what to ignore

The underlying practices that drive AI citation are neither new nor controversial. They are the same things good content strategy has always required: structured content, clear entity definition, authoritative authorship, and consistent brand presence across platforms. The 20-30% budget uplift needed to do this well is an extension of what your marketing team should already be doing.


AI models select content differently from search engines. Brand recognition is a stronger predictor of AI citation than backlinks. Entity consistency across platforms matters more than domain authority score. Content structured in self-contained, information-dense chunks is extracted at higher rates than flowing narrative prose.


Invest in: consistent entity presence across your website, LinkedIn, Crunchbase, and Wikidata; named expert authorship on technical content; ungated, structured versions of your most credible scientific claims; and quarterly content freshness updates on your highest-value pages.


Be sceptical of: proprietary AEO scoring tools with no published methodology; agencies guaranteeing AI citation without disclosing which platforms or how they measure it; and conversion benchmarks from single-site studies presented as industry standards. Google's own search team has called the more aggressive version of this pitch "spam and scamming".


The window is shorter than you think

Citation Compression is already operating. Only five brands appear in approximately 80% of AI responses per B2B category. The brands establishing AI visibility now are setting the baseline that models will default to for years. Training data is not refreshed continuously; a model defaults to the internet as it existed at its training cutoff.


Across both ELRIG and SLAS, power users of AI tools nearly tripled between October 2025 and January 2026 - from 6.6% to 18.6% of exhibitors. The jump could reflect the geographic mix of the two events or genuine adoption growth over time, but either way it is not incremental change. It is the top of the adoption curve beginning to pull away from the rest.


This matters for AI discoverability because power users are the ones actively creating content, building communities, referencing suppliers, and generating the third-party presence that feeds AI training data. Being the brand that power users reference is a structural input into future citation rates, not a vanity outcome.


Among those who had checked whether AI recommends their company, 75% found themselves listed. The opportunity is real. The problem is that 62% of the sector had never looked - consistently, across two events, two continents, three months.


If AI defines your category and you are not in that definition, you are not competing - you are excluded.


Your agency's deck might have been overselling. The underlying dynamic it described is not.