
 

Podcast

S2: Ep 17 When Did Your AI Stack Become Infrastructure?

By Matt Wilkinson

Life science CEOs embedding AI in compliance workflows face regulatory switching costs, not just technical ones, when models change.

 

 

Shownotes

You didn't make one big decision to hand control of your compliance workflow to an AI vendor. You made five small ones, and each felt completely reasonable at the time. By the time the model update arrived, the exit cost wasn't a sprint of prompt re-engineering. It was a revalidation programme.

 

This episode is for CEOs and commercial leaders at life science tools companies who are scaling AI across their teams and have not yet drawn the line between experimental workflow and validated process.

 

Matt and Jasmine walk through the story of Henry, a composite built from real conversations with life science tool CEOs, who adopted AI-first operations, hit a model deprecation event, and discovered that the productivity gains he had built his headcount decisions on were sitting on infrastructure he did not control. The conversation unpacks the five decisions that created the problem, the control layer architecture that solves it, and the two-column framework every CEO should run this week.

 

The core idea: embedding AI inside a validated compliance workflow does not make you more productive. It makes you dependent. And the switching cost is not technical. It is regulatory.

 

What you will learn:

- Why each of Henry's five AI adoption decisions felt low-risk and why together they created a structural dependency

- What changes the moment AI enters a GxP-adjacent validated process and why that is a different category of commitment

- What a control layer is, why it matters, and how tools like Open WebUI sit in that role

- How to split every AI tool you use into two buckets: validated process or experimental workflow

- Why the humans who understood the process before AI ran it are not optional infrastructure

- What question to ask before embedding any AI tool in a compliance workflow: if this changed tomorrow, could I swap it in a week?

 

Keywords:

AI governance life sciences, validated process AI, GXP AI risk, AI infrastructure life science CEO, model deprecation compliance, control layer AI, AI workflow switching costs, life science marketing AI, regulatory AI risk, AI stack governance, life science tools company AI, AI compliance workflow

 

Subscribe to A Splice of Life Science Marketing for sharp, commercially grounded conversations on strategy, AI, and go-to-market for life science brands.

 

Read the full blog post

 

Transcript

Below is the full transcript of this episode of A Splice of Life Science Marketing. Matt Wilkinson and Jasmine explore the story of Henry, a fictional but composite life science tools CEO, to unpack how incremental AI adoption decisions create regulatory infrastructure risk, and what leaders can do about it before a model update exposes the gap.

Opening: The Question That Frames Everything

Speaker: Matt Wilkinson

When was the last time your buyer was actually in the room when you made a decision about them? Hi, I'm Matt.

Speaker: Jasmine

And I'm Jasmine. This podcast exists because that question matters more than most marketing teams realize. Let's get into it.

Speaker: Matt Wilkinson

Jasmine, how you doing?

Speaker: Jasmine

Good, good. How about yourself?

Speaker: Matt Wilkinson

Very well, thank you.

Introducing Henry and the Blog Post

Speaker: Jasmine

So let's kick it off. Today we're going to talk about a recent post you wrote that opens with a line that really stopped me: "Your AI stack is now infrastructure, and you probably didn't notice when that happened." I want to start there. Who is Henry and why does he matter?

Speaker: Matt Wilkinson

Henry is fictional. He's built from real conversations I've had with life science tool CEOs over the past year or so. Fast-growing companies trying to adjust to funding uncertainty, and rather than growing the team, trying to make sensible decisions by applying AI.

Speaker: Jasmine

So what was the trouble?

Speaker: Matt Wilkinson

He adopted AI first, marketing productivity went through the roof, compliance review cycles dropped. The board was really happy. But then came one of those routine model updates, buried somewhere in the platform terms, that changed how the tools behaved. Those tools were now embedded in approved workflows and they couldn't be rolled back because the old models were no longer available. So getting the previous behaviour back meant weeks of prompt re-engineering with no guarantee of success, or moving the workflows to different providers in time.

Speaker: Jasmine

So how much would this cost?

Speaker: Matt Wilkinson

Well, I mean, it depends on how embedded it is, but you can imagine months of work making sure that everything's re-engineered. And it's not just fixing the prompts, it's going back through any of the regulated processes that you have. If you're operating under ISO or even greater requirements, you have to go through and make sure everything's validated, and that can take a long time.

So that's really the issue. It's not necessarily just the re-prompt engineering. It's going through the formal process at every step in those processes.

The Commercial Pattern: One Level Down

Speaker: Jasmine

So when I put my commercial leader hat on, I read that and immediately thought about how many teams are doing the same thing right now, just one level down. Not a single CEO-level infrastructure call, but content workflows, claims tools, campaign automation: same pattern, smaller scale, same loss of control when something underneath changes.

Speaker: Matt Wilkinson

That's the argument. It's the same mechanism, just a different scale.

The Five Decisions: How Henry Got Here

Speaker: Jasmine

Okay, so can you walk me through the sequence? Henry made five decisions before he realized he had a problem. What were they and why did each one feel reasonable at the time?

Speaker: Matt Wilkinson

Well, each decision felt very, very reasonable. The first decision was the easiest. His marketing manager started using ChatGPT to draft campaign copy. The output was good and quite fast. His sales team used AI to prep for customer calls. A researcher automated two hours of weekly report summarisation. None of it cost real money. The instinct was right. This was a competitive edge sitting on the table and he moved.

The second decision was the enterprise licence rollout -- AI embedded across the commercial team. Marketing was able to increase their output dramatically. Campaign turnaround was much faster. And that was where most leadership teams stopped evaluating and started scaling. But the third decision was headcount -- when people left, choosing not to backfill roles but reassigning tasks and using AI to fill the gaps. In a market where specialist commercial talent is already scarce, holding open headcount felt like the right call. Output held up, and the numbers looked good.

But the fourth decision was the one that created the structural problem. Approving the use of an AI tool that validates marketing materials against brand and regulatory requirements, embedded deeply inside the compliance workflow.

Now this is the decision that changed the nature of every decision before it. Now Henry had moved from using AI to assist people into using AI inside a process that is legally required to be documented, validated and auditable. That is a different category of commitment and he hadn't realised it. The fifth decision was the absence of one. He never asked what would happen when the tool changed. He assumed continuity because the output had been reliable.

The assumption is what the model update exposed.

The Infrastructure Threshold: When AI Stops Being a Productivity Tool

Speaker: Jasmine

Okay, so the pattern I keep coming back to is the marketing approvals workflow specifically. In life sciences tools companies, getting marketing material through regulatory review is a real bottleneck. A tool that cuts two weeks to three days is a competitive advantage. I can see why the call gets made.

What I don't think most leadership teams realise is that the moment AI enters a validated compliance process, it stops being a productivity tool. It becomes infrastructure. And infrastructure carries a completely different risk profile.

Speaker: Matt Wilkinson

That's the sentence the whole blog tries to land. Once you've automated the work and at the same time outsourced control of the work -- when both happen together, it feels like the same decision, but it really isn't. And you've got to be really careful in being able to distinguish what you can change and what you can't, and where the work is to go through a new process. Even if you were just looking at the front end of a process, just shifting from ChatGPT to Anthropic's Claude would be quite a big lift if you've already got lots and lots of automation, custom GPTs and a whole range of work that you're building an organisation on. One model update can break a lot of those things. And so really being clear about what's important to the business and what you have control over is really critical.

Is This a Life Sciences Problem or a Universal One?

Speaker: Jasmine

So is this actually a life sciences problem? Any company in any industry embedding AI in core workflows faces the same vendor dependency. What makes life sciences specifically more exposed?

Speaker: Matt Wilkinson

Two things. First, GxP. In a pharmaceutical or medical device adjacent business, any process that touches product claims, compliance, sign-off or regulatory submission lives inside a validated environment. Validation means documented, auditable and change controlled. When the AI tool inside the environment updates its model, the change is no longer just a product quality issue -- it's potentially a regulatory event. And second, the switching costs. They aren't just technical: it's a revalidation programme that can pull you away from getting the work done. So no matter how clean the technical move would be, the revalidation is the problem.

The Control Layer: Architecture as the Answer

Speaker: Jasmine

So Henry's resolution is commissioning a control layer. I had to read that section twice. What is it in plain language and why is it the answer rather than just better vendor selection?

Speaker: Matt Wilkinson

So a control layer is a buffer between your business processes and the AI tools running underneath them. The simplest way to think about it -- instead of your tool talking directly to whatever model the vendor has chosen, everything runs through a middle layer that you own. That layer's job is to translate. Your process sends a request, the middle layer routes to whatever AI model is currently best for the job. The response comes back. From your process's perspective, nothing has changed when the underlying model changes. The middle layer absorbs it. In practical terms, tools like Open WebUI sit in that role. Essentially, they act as a control panel that lets you select models and creates that buffer between the business process and the model itself.
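The routing idea can be sketched in a few lines of Python. This is an illustration of the principle only -- the `ControlLayer` class, the task names and the backends are all invented for this sketch, not the interface of any real tool mentioned in the episode:

```python
# Minimal sketch of a control layer: business processes call a stable
# interface; the layer owns the task-to-model mapping, so swapping a
# model never touches the process itself.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ModelBackend:
    vendor: str
    version: str                    # pinned version, logged for change control
    generate: Callable[[str], str]  # stand-in for a real model API call


class ControlLayer:
    def __init__(self) -> None:
        self._routes: Dict[str, ModelBackend] = {}

    def register(self, task: str, backend: ModelBackend) -> None:
        self._routes[task] = backend

    def run(self, task: str, prompt: str) -> str:
        # The calling process never knows which model answered.
        return self._routes[task].generate(prompt)


layer = ControlLayer()
layer.register("claims-review",
               ModelBackend("vendor-a", "2025-01", lambda p: f"[vendor-a] {p}"))
print(layer.run("claims-review", "check claim 12"))

# A model swap is one re-registration, not a workflow rebuild.
layer.register("claims-review",
               ModelBackend("vendor-b", "1.3", lambda p: f"[vendor-b] {p}"))
print(layer.run("claims-review", "check claim 12"))
```

The design point is the last two lines: the business process makes the same call before and after the swap, which is exactly what keeps a model change from becoming a process change.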

Speaker: Jasmine

Let me put this in commercial terms because that's where the real implication sits for most CEOs listening. When you embed AI into your content approval workflow -- say an AI tool checking claims consistency or flagging regulatory language -- you're making a decision that looks like a productivity choice and is in fact a multi-year infrastructure commitment.

The control layer principle means asking before you embed it: if this tool changed its behaviour tomorrow, could I swap it out in a week? If the answer is no, you either build the control layer first, or you keep the process human-run until you can.

Separating Tools by Regulatory Exposure

Speaker: Matt Wilkinson

And the regulatory classification question is the other half of it. In the story I told, Henry's mistake was not separating his AI tools by their regulatory exposure. Some tools sit inside validated processes. Those carry GxP switching costs and need formal change control. Others are experimental -- marketing copy, content drafts, call prep. They carry only the switching cost of actually changing the human behaviour behind using them.

Treating them identically is how you end up with a big problem in a workflow that started life as a time-saving idea.

Speaker: Jasmine

So the practical map is this: every AI tool you use or are evaluating sits in one of two buckets -- validated process or experimental workflow. The governance, the vendor contract review, the control layer question -- all of that is determined by which bucket it sits in.

Henry's Resolution and the Honest Lesson

Speaker: Matt Wilkinson

Exactly, and the honest version of Henry's story is that he never drew that map. Nobody had told him he needed to.

So just to play the devil's advocate here, the control layer does add a level of complexity and cost to solve a problem that some may never face. But I think it's becoming increasingly clear that as we adopt AI more and more, and more organisations start looking at how and where they reap the benefits of AI, we have to be more and more careful about where we apply it.

And just look at what the implications are when the model changes, because the models are getting better and better, and many of the providers are now deprecating some of those older models. And that's the risk. It's not just about being able to say, hey, we'll just carry on using GPT-4, because you now can't access it as easily, even via an API.

So it's about being able to ask: what happens here? And I think this is where, for some of those processes, a lot of organisations are going to choose either open-source models or models they have more control over, so they can make sure those models remain available.

Henry's story doesn't end badly, and it doesn't have to. He commissioned that control layer and continued to reap those productivity gains. But what changed was the architecture, and with it the answer to the one question that matters when the model changes: are you still in control of your business? That's really important. When the model changes, or a vendor goes out of business, you shouldn't be left with a big gap in your business processes; you should have something you can easily switch to.

Closing: The Two-Column Test and the Human Insurance Policy

Speaker: Jasmine

So for a CEO of a life sciences tools company, the practical question is this: draw two columns -- validated process, experimental workflow. Put every AI tool you are currently using in one of them. And if anything in the validated column doesn't have a clear exit path -- a way to swap the underlying model without a six-month revalidation event -- that's your first call tomorrow morning.
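The two-column exercise can be made concrete as a tiny inventory script. A minimal sketch, assuming made-up tool names and a hypothetical `exit_path_days` field (how long a model swap would take today; `None` means nobody knows):

```python
# Illustrative two-column inventory: "bucket" drives the governance,
# and the exit-path question flags the first calls to make.
TOOLS = [
    {"tool": "claims-consistency checker", "bucket": "validated",
     "exit_path_days": None},
    {"tool": "regulatory language flagger", "bucket": "validated",
     "exit_path_days": 5},
    {"tool": "campaign copy drafting", "bucket": "experimental",
     "exit_path_days": 2},
]


def first_calls(tools, max_days=7):
    """Validated-bucket tools with no clear exit path inside a week."""
    return [t["tool"] for t in tools
            if t["bucket"] == "validated"
            and (t["exit_path_days"] is None or t["exit_path_days"] > max_days)]


print(first_calls(TOOLS))   # -> ['claims-consistency checker']
```

The seven-day threshold is taken from the "could I swap it in a week?" test in the episode; everything else here is an assumption for illustration.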

The Klarna story is the other half of this, which we didn't spend time talking about. Even the company that became the poster child for AI replacing human roles had to quietly rebuild the human layer it thought it had eliminated. Quality became the binding constraint.

In this industry, quality is also a legal obligation. The humans who understood the process before AI ran it are not optional infrastructure. They're the insurance policy -- and don't let them quietly disappear.

Speaker: Matt Wilkinson

I couldn't agree more. Should you choose to read it, the links are in the show notes -- the full post, "Congratulations. Your AI Stack Is Now Infrastructure", is at strivenn.com. And if you're working through an AI governance question in your commercial team right now, where you recognise Henry in your own organisation, we'd love to hear about it.

Speaker: Jasmine

And if you're the CEO or marketing leader who just drew those two columns and found something in the wrong bucket, come and find us. That's exactly the conversation this podcast exists for. Again, fantastic conversation, Matt. Thanks so much.

Speaker: Matt Wilkinson

Thank you, Jasmine. Look forward to catching up again soon.

Speaker: Jasmine

Sounds good.

Q&A

My team is already using AI for content drafts and call prep. Do I need a control layer for that?

No -- not yet. Experimental workflows like content drafting and call prep sit in the low-risk bucket. The switching cost if the underlying model changes is behavioural, not regulatory. You retrain the habit, you do not revalidate a process. The control layer question only becomes urgent the moment AI touches something documented, auditable or change-controlled. Keep a clear inventory of which tools are in which bucket and review it quarterly as your stack grows.

How do I know whether a workflow counts as a validated process under GxP?

Ask one question: if this process produced an incorrect output, could it result in a regulatory finding, a product quality failure, or a compliance breach? If yes, it is validated process territory. Marketing claims review, regulatory language flagging, sign-off workflows -- these all qualify. Campaign copy drafts, meeting summaries, and internal briefing notes typically do not. If you are unsure, your quality or regulatory affairs lead will know within ten minutes of you asking them.

We are evaluating a new AI tool for our compliance review workflow. What should we ask the vendor before we sign?

Three questions matter most. First, what is your model update and deprecation policy, and how much notice will you give us before a model version is retired? Second, can we pin to a specific model version and for how long? Third, do you provide change logs at the model level, not just the platform level? If the vendor cannot answer all three clearly, that is signal. Build the control layer before you embed, or keep the process human-run until you have those answers in writing.
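The version-pinning question can also be enforced in code rather than left as policy. A minimal sketch with invented model identifiers -- no real vendor's naming scheme is implied:

```python
# Guardrail sketch: validated workflows must reference a dated model
# snapshot, never a floating alias whose behaviour can change silently.
def request_config(model_id: str, validated: bool = True) -> dict:
    if validated and model_id.endswith("-latest"):
        raise ValueError("validated workflows must pin a dated model snapshot")
    # Recording the exact model id is what makes the audit trail meaningful.
    return {"model": model_id, "record_in_audit_trail": True}


print(request_config("vendor-model-2025-01-15")["model"])
# request_config("vendor-model-latest") would raise for a validated workflow.
```

A check like this turns the vendor's answer to question two into something your own pipeline refuses to violate.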

We already have AI embedded in our compliance workflow without a control layer. What should we do now?

Start with an audit, not a rebuild. Map every AI tool currently touching a validated process and document the model version, the vendor deprecation policy, and the estimated revalidation cost if the model changes without notice. That exercise will tell you which tools carry the most exposure. Prioritise building a control layer around the highest-risk workflow first -- typically the one with the longest revalidation cycle. You do not need to solve the whole stack at once. You need to neutralise the worst single point of failure.
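That audit can start as nothing more than a ranked table. A minimal sketch with hypothetical tools, field names and numbers:

```python
# Illustrative exposure audit: the tool with the highest revalidation
# cost and the least deprecation notice is the first control-layer
# candidate -- the worst single point of failure.
AUDIT = [
    {"tool": "claims checker", "model": "vendor-a/2025-01",
     "notice_days": 30, "revalidation_weeks": 16},
    {"tool": "label copy QA", "model": "vendor-b/1.3",
     "notice_days": 90, "revalidation_weeks": 4},
]

priority = sorted(AUDIT,
                  key=lambda t: (-t["revalidation_weeks"], t["notice_days"]))
print([t["tool"] for t in priority])   # -> ['claims checker', 'label copy QA']
```

The sort key is the judgment call: longest revalidation cycle first, ties broken by how little warning the vendor gives before a model is retired.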

We made headcount decisions based on AI productivity gains. What happens if a model update breaks those workflows?

This is the structural risk the episode is built around. If the humans who ran the process before AI replaced them are gone, the recovery time after a model change is not just technical -- it is also a knowledge and capacity problem. Before you make any further headcount decisions tied to AI-driven productivity, document the process at a level that would allow a new person to run it manually. That documentation is not a contingency plan. It is the minimum viable insurance policy for every validated workflow AI is currently running.
