Life Science Marketing Insights & AI Strategy | Strivenn Thinking

Congratulations. Your AI Stack Is Now Infrastructure.

Written by Matt Wilkinson | Apr 22, 2026 10:15:00 AM

Congratulations. Your AI stack is now infrastructure. You probably did not notice when that happened. And that is the problem.

Henry did not notice either. Henry is the CEO of a life science tools company. He is purely fictional, a composite of clients and colleagues I've spoken to over the past year, but I've heard this story time and again.


Henry leads a team of forty-three employees. The company is growing fast, operating in a market where tariff shifts, funding cycles, and the relentless noise of AI adoption have made every strategic decision feel simultaneously urgent and uncertain.


He made the best calls he could. That is the point. And yet, he still ended up in a mess.


Intelligence so cheap it felt irresponsible not to use it

Henry first noticed the shift in early 2024. Not because he was watching for it - he was managing a board that wanted growth, while his largest US distributor was absorbing the early tremors of trade policy uncertainty. His team was stretched. Headcount was under pressure.


Then his marketing manager started using ChatGPT to draft campaign copy. And something clicked.


The output was not just acceptable. It was good. Fast. Iterative in a way that human drafting cycles never were. His sales team started using AI to prep for calls and came in sharper, better briefed, more confident. A researcher in applications support quietly automated two hours of weekly report summarisation and spent the time on work that actually needed her.


None of it cost anything meaningful. In a market generating more AI adoption noise than any leadership team could reasonably filter, Henry's instinct cut through: this was not a trend to monitor. It was a competitive edge sitting on the table.


So he moved.


By Q3 2024, Henry had invested in enterprise licences across the commercial team. AI was embedded across the marketing function. His marketing manager was producing three times the content at the same cost. Campaign turnaround that used to take two weeks was taking three days. The quality held. The board was pleased - and board pressure had been a constant backdrop for eighteen months.


Then Klarna published its results. AI agents doing the work of seven hundred full-time employees. Forty million dollars in annual profit improvement. Henry forwarded it to his CFO with a single line: "We need to talk about resourcing."


The best decisions he could make

What happened next was not reckless. Every individual decision was defensible. All of them made sense given what Henry knew at the time.


He restructured thoughtfully - chose not to backfill two positions when people left, and redistributed responsibility to tools that were already outperforming. The headcount pressure eased. The output did not drop. In a climate where demographic shifts were already thinning the talent pool for specialist commercial roles, leaving those positions unfilled felt prudent.


Then came the regulatory workflow. Veeva had launched its AI Agents suite. The MLR Bot - validating marketing materials against brand guidelines and regulatory requirements - was exactly what the team needed. Review cycles that had taken two weeks came down to three days. Compliance confidence went up.


Henry approved it. It solved a real problem. It saved real time. It sat inside Veeva, which the company already used, which already held eighty per cent of the industry's CRM market, which already had the compliance configurations in place.


The geopolitical noise was still running in the background. Pricing pressure from US tariffs was reshaping his cost base. The board wanted margin improvement. The MLR Bot delivered it. The decision was rational given every input available at the time.


He did not get locked in. He got efficient. He did not lose control in a single decision. He lost it one reasonable decision at a time.


It felt like the future. Because it was.


Something started shifting in the output

The first sign was easy to dismiss. In late 2025, a model update was pushed in one of their key tools. Routine, unannounced, covered in the platform terms Henry had not had time to read carefully.


Within two weeks, the marketing team noticed something was off. Subject lines that used to land with a certain precision were drifting. The MLR Bot was flagging content it had previously cleared, and clearing content the team would have previously queried. The tone in automated copy had shifted in ways that were subtle but real - different vocabulary patterns, different sentence rhythm, different risk calibration in the compliance layer.


Nobody could point to a single failure. It was more like the instrument had been recalibrated without telling the musicians.


Henry asked the obvious question: can we roll back?


The answer was no: the vendor controlled model selection. The update was live across all accounts. If Henry wanted the previous behaviour, he would need to reconstruct it through prompt engineering - weeks of work with no guarantee of replication - or consider moving the workflow to a different provider.


That was when the audit began.


What the audit revealed was not one problem. It was a stack.

Five layers. Each compounding the one below it.


  • API lock-in: switching models requires six to twelve months of refactoring without a middleware layer in place.
  • Model lock-in: fine-tuned variants are proprietary and cannot be exported.
  • Embedding lock-in: the entire vector database needs rebuilding if the provider changes.
  • Prompt lock-in: prompts tuned to one model's behaviour break on any alternative.
  • Workflow lock-in: agentic systems are woven into the governance and observability stacks at the same time, so a model change ripples through both.

Each layer did not just add cost. It multiplied it.
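To make that compounding concrete, here is an illustrative sketch. The multipliers below are hypothetical placeholders, not figures from Henry's audit; the point is only that layered lock-in multiplies switching effort rather than adding to it.

```python
# Illustrative only: hypothetical multipliers showing how lock-in layers
# compound. The base cost is a notional engineering-months figure for
# switching providers with no lock-in at all.
BASE_SWITCHING_COST = 2.0  # months

# Each layer multiplies the effort of moving a workflow elsewhere.
LAYER_MULTIPLIERS = {
    "api": 2.5,        # refactor every integration point
    "model": 1.8,      # re-create fine-tuned behaviour on a new model
    "embedding": 1.6,  # rebuild the vector database
    "prompt": 1.4,     # re-tune prompts to the new model's behaviour
    "workflow": 2.0,   # disentangle agents from governance/observability
}

def switching_cost(layers: list[str]) -> float:
    """Estimated effort (months) given which lock-in layers apply."""
    cost = BASE_SWITCHING_COST
    for layer in layers:
        cost *= LAYER_MULTIPLIERS[layer]
    return cost

# One layer is a manageable project; all five is a different category.
print(round(switching_cost(["api"]), 1))
print(round(switching_cost(list(LAYER_MULTIPLIERS)), 1))
```

With these made-up numbers, one layer roughly doubles the work; all five turn a two-month move into a multi-year one. The exact figures are invented, but the shape of the curve is what Henry's audit found.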


The cost of switching is not what anyone expected

The question Henry put to an outside consultant was straightforward: if we needed to move any of these workflows to a different provider in the next twelve months, what would that involve?


The answer took three weeks to compile.


The MLR workflow, now embedded in Veeva's validated environment, was not portable. Moving it would trigger a formal re-validation event under GxP regulations - installation, operational, and performance qualification (IQ, OQ, PQ), compliance documentation, potential regulatory re-audit. The consultant estimated six months and a budget north of a hundred thousand pounds. Not because anything had gone wrong. Because that is the cost of switching a validated process.


He automated work. He outsourced control.


Here is the part nobody publishes

Eighteen months after replacing hundreds of employees with AI agents, Klarna was quietly rebuilding the human layer it thought it had eliminated. Quality had become a binding constraint. Customer experience had degraded in ways that were hard to measure until they were very hard to ignore.


Klarna is not in a regulated industry. The cost of reversing course was reputational and operational. They could reverse it.


Henry's MLR workflow could not be reversed in a quarter. The humans who had run it were gone - not fired, just not replaced during a period when every headcount decision was being scrutinised. The institutional knowledge had diffused. And the compliance clock on re-validation was not something he could negotiate around.


The resolution is not a warning. It is a design principle.

Henry's story does not end badly. That is the point.


He commissioned the abstraction layer. He ran a portability audit across every AI-dependent workflow. He mapped each touchpoint against its regulatory classification and separated validated processes from experimental ones - deliberately, not reactively.


In practice, an abstraction layer in a life science commercial stack means routing all model calls through a single middleware interface - tools like LiteLLM or a custom API gateway - so that swapping the underlying model becomes a configuration change rather than a six-month engineering project. It means storing embeddings in systems you control, not inside a vendor's managed environment. And it means a clear map of which AI tools sit inside validated processes and which do not - because those two categories carry entirely different switching costs and deserve entirely different governance.
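The routing idea can be sketched in a few lines. Everything below is a stand-in, not Henry's actual stack or any vendor's real SDK: production middleware such as LiteLLM adds retries, logging, and cost tracking on top of this basic shape.

```python
# Minimal sketch of a model-routing layer: application code calls route(),
# and which provider and model answer is configuration, not code.
# The provider "clients" here are stub lambdas, not real SDK calls.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelRoute:
    provider: str
    model: str
    call: Callable[[str], str]  # provider-specific completion function

# In production this registry would wrap real provider clients.
REGISTRY: dict[str, ModelRoute] = {
    "primary": ModelRoute("vendor-a", "model-x", lambda p: f"[model-x] {p}"),
    "fallback": ModelRoute("vendor-b", "model-y", lambda p: f"[model-y] {p}"),
}

ACTIVE_ROUTE = "primary"  # swapping providers is a one-line config change

def route(prompt: str) -> str:
    """Send a prompt through whichever route the config currently names."""
    return REGISTRY[ACTIVE_ROUTE].call(prompt)
```

The property this buys is the one Henry's audit found missing: when a vendor pushes an unwanted model update, moving the workflow means editing a registry entry, not six months of refactoring.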


None of this was anti-AI. Henry still believes AI is the most significant operational shift of his career. The productivity gains are real. The competitive pressure to adopt is real. The VUCA (Volatility, Uncertainty, Complexity, and Ambiguity) environment that pushed him to move fast was real.


What changed was the architecture. He stopped treating AI tooling decisions as procurement choices and started treating them as long-duration commitments - the kind that deserve the same scrutiny as a facility lease or a distribution agreement.


The question is not whether you should adopt AI. That decision has already been made for you. Henry made it in Q3 2024. So did most of your competitors. The question is: when the model changes without warning, are you still in control of your business?