Meta bought a social network recently. The users are all bots. And that is precisely the point.
The platform is called Moltbook. It launched in late January 2026 as a Reddit-style network built exclusively for AI agents - autonomous software that can post, comment, and vote, while humans can only observe. Within days it went viral, described in some corners of the tech press as "the front page of the agent internet." By March 10th, Meta had acquired it and folded it into its AI infrastructure division.
At roughly the same time, OpenAI hired the Austrian engineer behind OpenClaw - the open-source framework that powered Moltbook's agents in the first place. Sam Altman, OpenAI's CEO, commented: "The future is going to be extremely multi-agent."
These two deals are not about social media. They are about something that should concern every life science marketer thinking carefully about where commercial decisions are made in the next three years. They are about infrastructure. The connective tissue that allows AI agents to find, evaluate, and recommend products - before any human gets involved.
And here is the uncomfortable truth sitting underneath those headlines.
The race to shape what AI agents think about your brand is already underway. Most life science companies are actively shaping it.
There are two kinds of life science brand right now.
Those actively shaping what AI systems say about them. And those who have no idea the conversation is happening.
This is not a metaphor. It is happening at a measurable, documented scale. In February 2026, Microsoft's Defender Security Research Team published findings that should have stopped every B2B marketer mid-sentence. In just 60 days of monitoring, they identified more than 50 companies embedding hidden instructions in their own web content - instructions designed to manipulate what AI assistants remember and recommend when they summarise that content.
The technique is called prompt injection. And while the most aggressive applications of it cross into grey-market territory, the underlying dynamic it exploits is real, legitimate, and growing fast.
When a procurement agent - human or AI - visits your website and asks an AI to summarise what they found, the content of your page does not just inform that summary. In some cases, it actively shapes what the AI recalls, recommends, and surfaces in future queries. The brands building for this new reality are doing so now. The window is open but it will not stay open.
You do not need to understand the technical mechanics in detail. But you do need to understand the commercial logic.
AI systems like ChatGPT, Perplexity, and Claude are increasingly used to research suppliers, summarise web pages, and make comparative recommendations. When these systems visit a webpage, they read not just the visible text but all the content the page contains - including text rendered invisible to human visitors.
Researchers at The Guardian demonstrated this in late 2024. They created a product page with mixed reviews and hidden instructions. When ChatGPT was asked to summarise the page, the hidden instructions turned the output "entirely positive" regardless of what the visible reviews actually said. The AI followed the instructions it found, not the content a human would see.
Microsoft's more recent findings went further. They found and documented a ready-made code package called CiteMET, that ships pre-built tools for embedding these manipulative instructions into "Summarise with AI" buttons. One of the 31 companies caught using it was a cybersecurity vendor. The irony barely registers against the urgency of what this signals.
This sits on a spectrum. At one end, Generative Engine Optimisation (GEO) - structuring content so AI systems can read, parse, and cite it accurately - is legitimate, evidence-backed, and rapidly becoming table stakes for brand visibility. Research published out of Princeton showed that well-structured, source-cited content can increase AI recommendation visibility by over 100%. At the other end, hidden instruction injection is deceptive and increasingly classified as a security threat. OWASP now ranks it as the number one risk in large language model deployments.
The line between these two approaches matters enormously. But so does recognising that the underlying game - being cited, recommended, and trusted by AI systems - is the same game. And it is being played whether you have decided to play or not.
The durable advantage is not gaming AI. It is becoming the most credible source AI can cite.
If you want to understand the full mechanics of how AI citation works for life science brands - and what the rules of discoverability look like now - the Strivenn AI Discoverability Hub is where to start.
Here is what this means in practice.
Picture a procurement manager at a mid-size biotech, sourcing reagent suppliers for a new workflow. She asks her AI assistant to recommend three options. It comes back with three names - confident, cited, specific. Your company is not one of them. She does not search further. The agent already did the work.
This is not a hypothetical. IQVIA is already positioning agentic AI for commercial planning in pharma. Gartner predicts that by 2028, 15% of day-to-day decisions in the sector will be made autonomously by AI agents. Forrester goes further, projecting that 89% of B2B buyers already use generative AI somewhere in the purchase process.
Bain and Company put it starkly in their 2025 analysis of marketing's new middlemen: within AI recommendation systems, expert opinions, earned media, and customer commentary, carry greater weight than branded content. The assets that have always mattered most in science - peer validation, third-party citation, domain authority - now matter even more. Because they are what AI trusts.
Wharton's Kartik Hosanagar coined the term B2AI to describe this dynamic. INSEAD researchers developed the concept of "Share of Model" - measuring how often, how prominently, and how favourably a brand appears in AI-generated responses. Their research revealed something stark: AI visibility has no page two. If your brand does not register in the top responses, it simply does not appear. At all.
This is citation compression at the category level. And it is exactly what our own survey data from SLAS 2026 signals. 62% of exhibitors had never tested whether AI would recommend their company to a buyer. Of those who had tested, 75% found themselves listed. The other 25% were invisible - and most of them did not know it.
Invisible demand leakage. Buyers who cannot find you in AI never reach you to complain about it.
The Moltbook and OpenClaw acquisitions are not isolated bets. They are part of a broader race to build the commercial plumbing for a world where agents make procurement decisions.
AgenticAdvertising.org launched in December 2025, introducing the Ad Context Protocol - the first formal framework for advertising within agent-to-agent communication. The IAB Tech Lab is adapting existing ad standards for agentic environments. Google has launched its Universal Commerce Protocol. OpenAI has partnered with Stripe to enable purchases directly inside ChatGPT.
Bill Gross - who invented paid keyword search at Overture - has started a new company, ProRata.ai, placing contextual ads inside AI search responses in real time. His framing is direct: "The age of the keyword is over. It is now the age of the prompt."
Of course, not every element of this infrastructure will survive first contact with commercial reality. Perplexity's early experiments with agent-targeted advertising were quietly scaled back in late 2025. OpenAI's decision to introduce adverts faced a lot of push back and Anthropic received plaudits for its Superbowl ads that lampooned the decision.
Building the theory is one thing; generating revenue from it is proving harder. The market is three to five years from maturity.
But that is exactly the wrong reason to wait. The brands building machine-readable, verifiable, expert-cited content now are building the authority signals that AI systems will trust when the infrastructure does mature. The window for establishing that authority is open. It will not remain so.
I explored this tension in detail in The Wild West Rules of AI Discoverability - including why blocking AI crawlers is one of the most damaging things a life science brand can do right now, and how recency signals are already reshaping which content gets surfaced and cited.
The practical implication is not complicated. It requires a shift in how you think about content, credibility, and visibility.
AI agents are not impressed by design. They cannot see your brand colours or your hero image. They read structure, authority signals, and citation patterns. Kearney coined the term "agent-preferred suppliers" to describe organisations with machine-readable content, verified documentation, and API-accessible product information. In life science terms: clear technical specificity, third-party validation, structured data, and a content footprint that earns citation from sources AI trusts.
This means your PR strategy is now also a discoverability strategy. Original research, peer validation, expert endorsements, and authoritative third-party coverage are not just for human buyers. They are the signals that move AI from not knowing you exist to recommending you by name.
It also means testing your own AI visibility right now. Ask ChatGPT, Perplexity, or Claude to recommend the top three suppliers in your category. If you are not on the list, you have found your most urgent marketing problem. Not a brand problem. A machine-readability problem that has a fix.
The data from both ELRIG Drug Discovery 2025 and SLAS 2026 suggests that most life science companies have not run this test. Among those that have, the results are instructive: power users - those with structured AI programmes, not just tool access - are pulling away from the rest. At ELRIG, 6.6% were power users. At SLAS three months later, that number had nearly tripled to 18.6%. The top of the adoption curve is beginning to separate from the field.
That separation is happening now. You can read the full analysis here.
The buyer journey your team has optimised for, assumed a human was running the initial research. That assumption is expiring. The brands that win in the next 36 months will be the ones that understood this early enough to build accordingly.
Meta did not buy Moltbook because it thought a social network for bots was a fun experiment. It bought it because it understood that agent infrastructure is the next advertising platform. And whoever controls where agents congregate and communicate controls where commercial attention flows.
OpenAI did not hire the man behind OpenClaw because it needed an open-source project. It hired him because it understood that multi-agent orchestration is the architecture of what comes next.
These are not bets on a distant future. They are bets on something already in motion - a shift in how buyers research, shortlist, and choose. A shift in which brands surface and which disappear.
Your brand is already being evaluated by AI systems your buyers use. The question is not whether that evaluation is happening.
In the age of AI discovery, brands do not compete for search rankings. They compete for machine trust.
The question is whether you are building it.
Start with the Strivenn AI Discoverability Hub - primary research, frameworks, and a practical audit for life science brands navigating the shift.