S2: Ep 14 Walking in the Customer's Shoes
By Matt Wilkinson
Persona AI keeps the customer present through organisational gravity - the internal approval process that dilutes buyer insight before launch.
Shownotes
You ran the interviews. You built the personas. The campaign went to market and the open rates collapsed. The research was not the problem - the buyer left the room somewhere between the first draft and the final approval.
This episode is for life science marketers who have watched good insight get softened by legal, reshaped by product, and diluted by a VP's instinct - and are looking for a structural fix that keeps the customer present throughout the entire commercial process.
Matt Wilkinson and Jasmine debate the organisational gravity problem: the predictable force that pulls marketing content away from buyer language and toward internal consensus. Matt argues that grounded Persona AI - a synthetic customer built from real voice-of-customer data - changes the evidence dynamic at every stage of review, not just at briefing. Jasmine pressure-tests whether visibility alone is enough to change organisational behaviour, and where the solution has genuine limits.
The key idea: AI can not only help you create better personas, it can help you keep those insights with you as you traverse the internal politics of approval - the process Matt calls organisational gravity.
What you will learn:
- Why organisational gravity turns customer documents into consensus documents - and what to do about it
- How a synthetic customer creates evidence at every stage of review, not just at briefing
- The difference between making misalignment visible and having the infrastructure to act on it
- What the minimum viable input set looks like for a grounded Persona AI - and why it is lower than most teams assume
- How to use Persona AI as a supplement to real buyer conversations, not a substitute for them
- Why a shared synthetic customer surfaces sales and marketing tension rather than papering over it
If this episode resonated, subscribe and leave a review - it helps more life science marketers find the conversation. Read the original blog post that sparked this debate: Walking in the Customer's Shoes.
For more info, explore PersonaAI and Strivenn's synthetic customer hub.
Keywords: persona AI, life science marketing, organisational gravity, synthetic customer, voice of customer, B2B content strategy, marketing and sales alignment, buyer personas, life science commercialisation, content approval process, customer centricity, AI in marketing
Read the original blog post: Walking in the Customer's Shoes. Subscribe and explore more at strivenn.com.
Transcript
In this episode, Matt Wilkinson and Jasmine debate the organisational gravity problem in life science marketing - why good buyer research rarely survives intact to market - and whether Persona AI is the right structural fix for keeping customer insight present through every stage of the commercial process.
Setting the scene - organisational gravity and the Frankenstein model
Speaker: Jasmine
And here we are again, Matt, how are you?
Speaker: Matt Wilkinson
I'm good, good to see you again.
Speaker: Jasmine
You as well. I am so excited for this podcast because I think this is my all-time favorite of your blogs. So shall we get going?
Speaker: Matt Wilkinson
Thank you. Thank you.
Speaker: Jasmine
All right, so 15 buyer interviews, they're transcribed, they're coded, they're built into real personas, the campaigns go to market, and then the open rates, they plummet. They're 7%, your normal is 35%, you're left wondering, what the heck just happened? The research wasn't the problem. The buyer left the room somewhere between the first draft and the final approval. What a visual that was. Legal softened the headline. The product team added two features. The VP asked for the heritage reference. Each decision individually defensible. Together, they produced a consensus document, not a customer document in what I used to call the Frankenstein model. Today, I'm debating with you on the organizational gravity problem in life sciences marketing content, and on whether persona AI is the right structural fix. You argue that keeping a synthetic buyer present throughout the process changes the evidence dynamic in review meetings. I think the problem is real and the direction is right. And I'm going to push on where the solution has its limits. Ready?
Speaker: Matt Wilkinson
I'm looking forward to it. A heads up, there definitely are some limits out there, but I think there are far more benefits to this approach than most people realize.
Is the problem about evidence or internal political skill?
Speaker: Jasmine
All right, let's see where we land. So the story blames organizational gravity for why good research doesn't survive to market. But Allison lost those arguments because she was arguing from instinct against process. Legal had a process, the VP had a process, and she had gut. Is the lesson that life science marketers need better evidence in review meetings, or that they need better internal political skill to protect the insight they already have?
Speaker: Matt Wilkinson
So I think it's a great question, but I don't think it's just about political skill versus evidence. They're not alternatives; they're sequential. You can't win a campaign review through political skill alone. Very often these processes go into a black box: they go into a system, they run through, and you're not part of them. And even when you are, pushing back becomes a fight. As you said, every single change on its own was defensible, but in the end you've created a Frankenstein that no longer speaks to the customer - even though the whole point of the piece is to speak to the customer. What I'm arguing for is a way of keeping your buyer in that loop and creating evidence at every stage as to why a decision is good or not. And if changes need to be made, they do need to be made - the question is how to make them in a way that doesn't lose the customer but keeps them in that loop. A lot of the work that you and I have been doing has been about building these synthetic customers: capturing everything about the context of the company and the products in a way that lets us speak to the customer in the words they use, answering their problems. Not only can marketing use them to generate data to push back with, but at each stage regulatory and legal can use them as a sparring partner: this is a change I need to make - how can I make it in a way that doesn't lose the customer?
Speaker: Jasmine
So the political reality in most life science organizations is that a senior stakeholder's instinct about brand outweighs marketing's evidence about buyer language more often than the framework implies. Persona AI makes Allison's position more defensible; it doesn't make her position more powerful. I'm not arguing against evidence. I'm arguing that the framework may be overselling how much evidence changes the outcome of the review meeting, when what it really does is make the lost argument better documented.
Speaker: Matt Wilkinson
Yeah. So it's an interesting point, but I don't think it's about winning the arguments. I think it's about shifting the narrative and shifting the tools that are used. It's not just about saying we need to include X, Y, and Z, and then leaving the marketer to push back. These are tools we can give to each of the people in that process. And if we want to be customer centric, why not keep our synthetic customer part of that process at every stage? As reviewers go through the content, they can check that any changes they suggest don't destroy the very core of what you're trying to achieve. That's the argument I'm really trying to make here. It's the difference between one marketer trying, on gut instinct, to step into the customer's shoes while the rest of the process barely knows the customer at all, versus making sure that the synthetic customer is there throughout the entire process.
The accuracy problem - confident objections versus current objections
Speaker: Jasmine
Yeah. So it strikes me that a lot of organizations almost pay lip service to saying they're customer centric and synthetic customers actually are a way to honestly be customer centric. So I want to pull on that thread a little bit more. You say grounded synthetic personas surface three to five objections the generic LLMs miss entirely. But a persona AI trained on historical interview data can't know about a competitor's recent clinical publication or a procurement freeze driven by an economic downturn or a new regulatory guidance that changes the buyer's risk tolerance. Does persona AI give you accurate objections or confident objections that were accurate months ago?
Speaker: Matt Wilkinson
The answer is both. And let's start by looking at what a Persona AI is. It's a form of synthetic customer. There's a lot of academic research on asking a large language model to take on the persona of a specific type of person. These work pretty well, and you can use the data in the large language model itself as an approximation of your customer. The argument that I've been making - and that has shown up in the work we've been doing so far - is that feeding the large language model a set of context about voice of customer, about what's being said online, and about brand sentiment around you and your competitors, building up a really detailed picture of a narrow customer persona, gives us a much better picture of who we're actually selling to. Now, that gives us a really good idea of a type of customer. Is it perfect? Is it going to know everything that's happening right now? Absolutely not. It's a case of really asking: how far can we push this? But I would argue that having this synthetic customer as part of your process is a huge leap forward compared with a beautiful PDF that's been designed and delivered, is maybe looked at by marketing, but more often than not sits on a hard drive gathering dust, never used. So I'm arguing that by building these synthetic customers, we can change the way we think about customer centricity and actually bring the customer with us through the entire commercial process.
Speaker: Jasmine
So the leap can go too far, meaning marketers who are relying on persona AI to validate their copy may be less likely to go back to real buyers for context, not more likely, because the synthetic customer is providing a form of validation that lulls you into complacency and confidence. The risk isn't that Persona AI gives wrong answers. The risk is that it reduces the urgency of asking real buyers questions. The more confident the synthetic customer feedback, the more it functions as a substitute for buyer conversations rather than a supplement to them. The refresh cadence you describe is the right answer, but refresh cadence requires discipline that tends to erode under campaign deadlines. And the operational reality is that the 18-month-old persona will still be running when the market has moved because the campaign deadline didn't leave time for a refresh cycle.
Speaker: Matt Wilkinson
So I think there are a few things to clear up here. The operational risk is absolutely real. And it's not just that the answers may be wrong - large language models are so convincing that those wrong answers will be delivered convincingly. That's something we have to be very, very aware of, and it means we do need to build governance around how we use these. I would also argue that the process of building these completely shifts the narrative around how we build personas. Yes, we still need sales and marketing alignment; that part of the process stays. But I don't think we need to spend the same time sat in workshops getting people to fill out one-line answers to a series of questions they're more or less picking up off a single LinkedIn profile. What this does is allow us to build a really deep, rich data set about each of our customer personas. Once we've done that, the AI tools allow us to interrogate that data as if it were a real human - essentially a digital twin - which gives us a much richer experience. As things change, we can easily update the context documents, add to them, bring in extra information. We've discussed recently how using AI to perform thematic analysis helps with exactly that: these tools let us update things much, much quicker than we ever could before, and the way agents work these days, we can even automate some of those changes. I think we're only at the early stages of seeing how these synthetic customers can approximate our real customers, but I'm convinced that this is the way forward.
Sales and marketing alignment - surfacing tension versus papering over it
Speaker: Jasmine
So you've also said that sales teams use the same synthetic customer to pressure test pitches and prepare for discovery calls. Sales and marketing have different relationships with buyer objections. Marketing wants to understand the buyer's world. Sales wants to handle the buyer's objections. Doesn't a shared synthetic customer paper over a tension that actually needs to be surfaced?
Speaker: Matt Wilkinson
It surfaces the tension rather than resolving it. If marketing's persona AI identifies that the buyer's primary concern is workflow confidence, and the sales team's prepared pitch leads with platform performance, that's a documented misalignment. So the synthetic customer makes it visible in a way that separate buyer research documents rarely do because both teams are asking the same persona the same questions. The tension between understanding the buyer's world and handling the buyer's objections - I don't think it's really a failure of sales marketing alignment. It's a predictable difference in function that becomes commercially costly when the two versions of the buyer never meet. A shared persona AI creates the meeting point without requiring a cross-functional meeting to discuss it. And if we're doing our job right, we're building the context for these. We're capturing all of that in the right way. What the product manager does with that visibility is a leadership decision. But the PM who can show a documented gap between the buyer's expressed concerns and the sales pitch language has something concrete to bring to a QBR. That's far more traceable than the abstract claim that sales and marketing are misaligned.
Speaker: Jasmine
So making misalignment visible is valuable if the organization is structured to act on it. In most life science tools companies, marketing and sales have separate reporting lines, separate planning cycles and separate incentive structures. The product manager who identifies a sales marketing message gap through persona AI has a finding that requires organizational collaboration to resolve. If the infrastructure for that collaboration doesn't exist, the finding lives in a presentation and doesn't change anything. The shared synthetic customer creates a shared evidence base. It doesn't create the cross-functional forum in which the evidence can be acted on. If the product manager takes the gap finding to a QBR where the VP of sales and the head of marketing are both present, and neither has authority over the other's team messaging, the finding produces acknowledgement rather than action. The framework is right that the shared persona is valuable. It may be overestimating how much visibility alone changes the organizational behavior.
Speaker: Matt Wilkinson
Yeah, visibility and diagnosis are prerequisites for change, not the mechanism for change. Somebody who brings a documented gap to a QBR has not solved the organizational structure problem. They've created the evidence that makes the organizational structure problem visible for people who can solve it. That's far more useful than the alternative of knowing it exists, but not having any evidence. I think there's also a slightly different challenge here as well. We know that AI is fundamentally changing roles and changing the way that organizations need to operate. My argument is as AI starts to transform how organizations operate, we need to be thinking about how can we use AI to bring the customer into every conversation that we have. This is my suggestion for how we do that - and bringing the customer into being able to query a synthetic customer in a meeting about everything from messaging through to sales or even in the innovation phase. I think that's a really exciting prospect.
Minimum viable input and the substitution risk
Speaker: Jasmine
So without a doubt, there's a lot of value in that without having to sort of book a meeting with a customer each and every time you have a question. And in no way am I saying that a synthetic customer replaces the need for meeting customers face to face, but you don't have to have a meeting for absolutely every question you have. But building a grounded persona AI requires interview transcripts, voice of customer data, LinkedIn profiles of target buyer types, and even more data. Not every life science marketing team has all that data or has it in a form that's usable. At what point is the investment in gathering the input data better spent getting back to actual buyers directly? What's the minimum viable input set that makes the synthetic customer more useful than the real interviews it's built on?
Speaker: Matt Wilkinson
I think the comparison of real customer interviews versus a synthetic customer build is probably the wrong frame. The minimum viable input to build a synthetic customer like this is far lower than going through the traditional agency process of building a persona. Yes, we have to have alignment on who our personas are - that's key. But beyond that, anybody can look up LinkedIn profiles of the types of people that fill the roles within our ICPs, so that's not difficult. From there, we can get voice of customer through deep research if we have to. Even if people aren't talking about our specific products, we can at least find out about brand sentiment across the category and about the questions people are asking. Those tools exist; we can do that already. If we've got voice of customer, fantastic - let's layer it on. If we've got calls from existing customers, from support or from sales, and that information is logged in a CRM somewhere - and let's hope it is these days - that data is powerful and we can pull it in. The more data we have, the better. But even if you don't have much internal data yet, because it's a new product in a new category and maybe you haven't even got customers yet - maybe you're at a very, very early stage - you can, if you're collating data in the right way, start with a thin synthetic customer and add to its context as you go. So I don't think it's a competition between the two. It's an additive process: more data gets added to the Persona AI as you get it.
Speaker: Jasmine
So the discipline of keeping buyer conversations active is itself extremely valuable, independent of any tool, and extremely important. A team that's doing buyer interviews every 18 months because they have persona AI between cycles may be doing fewer buyer interviews than a team that has no synthetic customer but treats regular buyer contact as a core operating discipline. If Persona AI reduces the frequency of real buyer conversations, it may be providing a substitution effect that's commercially worse than the problem it's solving. What's the right cadence of real buyer interviews alongside Persona AI to ensure the tool is supplementing buyer contact rather than replacing it?
Speaker: Matt Wilkinson
Yeah. So the supplementation versus substitution risk is an important design constraint in the operating model, and it's right to name it explicitly. The correct cadence is that customers should be talked to at the right times, and I would never say we should supplant those conversations. What I'm really advocating for is building synthetic customers - these Persona AIs - that are present between every touch point you would have had anyway. That way we keep the customer in the conversation the whole way through the process, and we use the voice of customer rather than a few quotes pulled into a pretty PDF or PowerPoint that we go back to and perform mental gymnastics around, trying to stand in our buyer's shoes. What we're actually able to do is have a digital twin of all of that information that we can query between each of those touch points. So I would always advocate for the most customer contact you can get, and for bringing as much of that data as you're allowed - and as customers agree to share - back into the organization and into these Persona AIs. That doesn't change. One of the things I'm really passionate about is that between those blocks, however long they may be, that customer data is far more actionable inside the Persona AI than it would be sat in a static PDF or PowerPoint.
Closing - the paper in the drawer versus the active synthetic customer
Speaker: Jasmine
Yeah, there's no disagreement there. Having an active, synthetic customer that's in many of your marketing campaign meetings always beats the paper in the drawer.
Speaker: Matt Wilkinson
Sure. And I think that's why I'm so passionate about it. As soon as you see the feedback you get from rating a particular piece of text from three different persona perspectives, you realize that even if it's not perfect, it's better than what most of us can do on our own - because the mental gymnastics of jumping from one set of shoes to another to another is so difficult. It just makes life so much easier. We can start using that as a hypothesis: how can we get better? Of course, we're always going to want to finesse the answers and sanity check them. I'm not advocating for taking the human out of the loop at all. But this gives us a perspective that is otherwise really hard to keep within the process, and that's what I'm really advocating for.
Speaker: Jasmine
Yeah, so I'd encourage everyone listening to go to your most recent blog and read the full post, and to visit our services page dedicated to Persona AI to see what a grounded synthetic customer built from real data looks like in practice. Read the original article here: Walking in the Customer's Shoes. So again, thanks. This was a superb blog post, Matt, and I look forward to our next discussion.
Speaker: Matt Wilkinson
So do I. All right, see you soon. Thank you so much. Bye.
Speaker: Jasmine
Bye for now.
Q&A
My content keeps getting softened in review. How do I use Persona AI to push back without it becoming a political fight?
Stop framing it as a fight and start framing it as a shared test. Before your next review meeting, run the draft and the proposed changes through your synthetic customer and document the delta - which version scores higher on the buyer's stated priorities, and why. Bring that output to the meeting as evidence, not opinion. You are not arguing your instinct against their process. You are showing what the customer would say about both versions. That shifts the burden of proof.
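The "document the delta" step can be sketched in a few lines. This is a hypothetical illustration, not a Strivenn product API: in practice the scorer would be an LLM judge prompted with the grounded persona context, but a simple priority-overlap check shows the shape of the evidence record you bring to the review.

```python
# Sketch: score two versions of a draft against the buyer's stated
# priorities and record which version wins and what got dropped.
# All names are illustrative; a real scorer would be an LLM judge
# grounded in the persona context, not keyword overlap.

def priority_score(draft: str, priorities: list[str]) -> float:
    """Fraction of the persona's stated priorities the draft addresses."""
    text = draft.lower()
    return sum(1 for p in priorities if p.lower() in text) / len(priorities)

def document_delta(original: str, revised: str, priorities: list[str]) -> dict:
    """Produce the evidence record to bring to the review meeting."""
    a = priority_score(original, priorities)
    b = priority_score(revised, priorities)
    return {
        "original_score": a,
        "revised_score": b,
        "winner": "original" if a > b else "revised" if b > a else "tie",
        "dropped": [p for p in priorities
                    if p.lower() in original.lower()
                    and p.lower() not in revised.lower()],
    }

# Hypothetical example: the review softened every buyer priority away.
persona_priorities = ["workflow confidence", "validation data", "turnaround time"]
original = "Cut turnaround time and build workflow confidence with validation data."
revised = "A heritage platform with two new features from our product roadmap."

delta = document_delta(original, revised, persona_priorities)
print(delta["winner"], delta["dropped"])
```

The output is the artefact: a winner and a list of priorities the revision dropped, which reads as evidence rather than opinion in the meeting.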
We do not have a lot of voice-of-customer data. Is it worth building a Persona AI with what we have?
Yes, and the bar is lower than most teams assume. Start with LinkedIn profiles of your ICP roles, publicly available category conversation data, and any CRM call notes you can access. That gives you a thin but functional synthetic customer. The key discipline is treating it as a living context document - add interview data, support call themes, and competitor mentions as you gather them. A thin persona AI used consistently outperforms a perfect PDF that nobody queries.
How do we stop Persona AI from becoming a comfort blanket that reduces how often we talk to real buyers?
Build the refresh cadence into the operating model before you launch the tool, not after. Set a fixed interval - quarterly is a reasonable starting point for most life science tools companies - for real buyer interviews, and treat those outputs as mandatory context updates for the persona AI. Frame the synthetic customer explicitly as a between-interviews tool. If your team starts citing persona AI output as a reason to skip a buyer call, that is the signal to enforce the cadence, not to abandon the tool.
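The cadence rule above can be made mechanical rather than left to discipline. A minimal sketch, assuming the quarterly interval suggested here (the function names are hypothetical):

```python
# Sketch of enforcing the refresh cadence: flag any persona whose
# grounding context has not been refreshed from real buyer interviews
# within the agreed interval. 90 days mirrors the quarterly starting
# point suggested above; names are illustrative.

from datetime import date, timedelta

REFRESH_INTERVAL = timedelta(days=90)  # quarterly buyer-interview cadence

def is_stale(last_context_update: date, today: date,
             interval: timedelta = REFRESH_INTERVAL) -> bool:
    """True when the persona is overdue for a real-buyer refresh."""
    return today - last_context_update > interval

# A persona last refreshed mid-January is overdue by late May;
# one refreshed in April is still within cadence.
assert is_stale(date(2025, 1, 15), date(2025, 5, 20))
assert not is_stale(date(2025, 4, 1), date(2025, 5, 20))
```

Wiring a check like this into the operating model (a dashboard flag, a blocked campaign brief) is what turns the cadence from an intention into a constraint.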
Our sales and marketing teams have separate planning cycles. How does a shared synthetic customer actually help when the organisational structure is the real problem?
It does not fix the structure - and it is not designed to. What it does is make the misalignment concrete and traceable. When your sales team's pitch language and your marketing messaging diverge, a shared persona AI produces a documented gap rather than an abstract complaint. That gap is something a VP of sales and a head of marketing can look at in a QBR and make a decision about. Visibility is the prerequisite for structural change, not a substitute for it.
How do we govern Persona AI outputs so teams do not treat confident-sounding answers as ground truth?
Set explicit rules at point of deployment. Label all persona AI outputs as directional, not definitive. Require that any persona AI finding used in a campaign brief or a sales pitch is tagged with the date the context was last updated. Build a simple quarterly review into the persona AI operating model where outputs are sanity-checked against recent buyer conversations. The risk is not wrong answers - it is convincing wrong answers. Governance is what separates a strategic tool from a confidence generator.
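The labelling rule can be implemented as a thin wrapper so no persona AI finding circulates without its metadata. A minimal sketch, assuming the quarterly sanity-check rule above (the structure is illustrative, not a defined schema):

```python
# Sketch of the governance label: every persona AI finding carries a
# "directional" status and the date its grounding context was last
# updated, so a reviewer can see at a glance how fresh the evidence is.
# Field names are illustrative.

from datetime import date

def tag_output(finding: str, context_updated: date, today: date) -> dict:
    """Wrap a persona AI output with its governance metadata."""
    age_days = (today - context_updated).days
    return {
        "finding": finding,
        "status": "directional",        # never "ground truth"
        "context_updated": context_updated.isoformat(),
        "needs_review": age_days > 90,  # quarterly sanity-check rule
    }

tagged = tag_output(
    "Buyers lead with workflow-confidence concerns, not platform specs.",
    context_updated=date(2025, 2, 1),
    today=date(2025, 6, 1),
)
print(tagged["status"], tagged["needs_review"])
```

A finding whose context is more than a quarter old arrives pre-flagged for review, which is exactly the separation between a strategic tool and a confidence generator.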