S2: Ep 16 Falsification Logic and the Invisible Buyer
By Matt Wilkinson
Scientist buyers use falsification logic -- one weak claim destroys your whole case -- so claim selection and buyer presence beat validation volume every time.
Shownotes
Your claims list passed legal, survived MS3, and still didn't land. The problem wasn't your evidence -- it was which claim was leading and whether your buyer was still in the room when you chose it.
This episode is for life science marketers, product managers, and commercial leaders building claims hierarchies for scientist buyers.
Jasmine and Matt unpack why scientist buyers apply falsification logic to commercial claims -- meaning one weak point invalidates everything before it -- while most commercial teams build on accumulation logic. They explore how organisational gravity edits the buyer out of decisions before anything reaches the field, and how a synthetic customer built from real voice-of-customer data can keep buyer presence active throughout the review process.
Key idea: Your buyer has already left the room before the claims list is written, edited out by organisational gravity -- and a synthetic customer keeps them present throughout.
- Why accumulation logic and falsification logic produce opposite commercial outcomes for the same claims list
- How a single weak claim destroys scientist buyer confidence regardless of how many strong ones precede it
- Why claim selection is a commercial judgement, not a validation problem
- How organisational gravity pulls messaging toward what is safest rather than what the buyer needs
- What a synthetic customer is, what it is built from, and what it cannot replace
- How to test whether your buyer has already left the room before your next review cycle
Keywords: life science marketing, scientist buyers, falsification logic, claims hierarchy, MS3 review, synthetic customer, organisational gravity, product marketing, voice of customer, commercial claims, accumulation logic, buyer presence
If this episode shifted how you think about your next review cycle, subscribe to A Splice of Life Science Marketing for new episodes every week.
The following is the full transcript of this episode of A Splice of Life Science Marketing. Matt Wilkinson and Jasmine discuss the commercial mismatch between how organisations build claims lists and how scientist buyers evaluate them -- and what a synthetic customer can do about it.
Opening: The buyer in the room
Speaker: Matt Wilkinson
When was the last time your buyer was actually in the room when you made a decision about them? Hi, I'm Matt.
Speaker: Jasmine
And I'm Jasmine. This podcast exists because that question matters more than most marketing teams realise. Let's get into it.
The LinkedIn message that started it
Speaker: Matt Wilkinson
Hi Jasmine. How you doing?
Speaker: Jasmine
Good, good. May the fourth be with you.
Speaker: Matt Wilkinson
And also with you. I want to start with both of us as readers before we get into the ideas. I came to your post and the thing that stopped me was the LinkedIn message you received after publishing it. Before we get into the debate, let's talk about it.
Speaker: Jasmine
Sure. So I sent a link to the blog to a senior product leader at a life sciences tools company. He has a science background and is now running product marketing. He said he normally reads messages like mine about a blog and sends a polite no thank you and moves on. This one he read end to end and forwarded it to his whole team.
Speaker: Matt Wilkinson
So why did he send this one on then?
Speaker: Jasmine
He said scientists are by far the hardest people to sell to. His team is currently building new marketing collateral, and they know from experience that one slightly off claim shuts down the opportunity, or at best slows it markedly. He has scientists on his team whose job it is to be the most critical voice in the room before anything goes out. He described them, with a laugh, as poor marketers and product managers.
Speaker: Matt Wilkinson
That laugh is interesting, because what he is describing is not unusual -- it's just usually not said out loud. The scientists inside your commercial organisation run the same check on your claims that your buyer will run externally. They are the internal proxy. And if that check happens at the end of the process rather than the beginning, you are already set up to fail.
Speaker: Jasmine
What struck me writing the post was that he's on both sides of the table, like you and I both are. He has been the scientist who stopped reading. Now he's the person trying to write something a scientist won't stop reading. That tension is what the blog is really about.
The story behind the post
Speaker: Matt Wilkinson
So what was the story behind the story? What made you write it?
Speaker: Jasmine
Yeah, a lot of lived experience here. I've been in many MS3 reviews where the claims list is long, technically validated, and still doesn't land in the field. And when we dug into why, the answer was never the quality of the evidence. It was always which claim was leading and whether it was chosen for the buyer or chosen because it survived legal. Those are different selection criteria and they produce very different outcomes.
Speaker: Matt Wilkinson
Couldn't agree more. Before we dive any deeper, what's an MS3 review, for those who may not be familiar?
Speaker: Jasmine
Yeah, it stands for milestone three. It's a gate in the phase-gate new product development process -- the point where the specifications have already been locked in, verification is complete, and you're heading into validation.
Accumulation logic versus falsification logic
Speaker: Matt Wilkinson
Thank you. So in the post you make a distinction between accumulation logic and falsification logic. Can you explain the difference, and why people default to one or the other?
Speaker: Jasmine
Yeah. Scientist buyers are not applying the same logic to your claims list that you used to build it. Most commercial teams operate on accumulation logic: the evidence stacks up, and more proof points make a stronger case. Each additional claim increases the probability that the buyer will be convinced. That's how you build a product deck, how lawyers build briefs, and how most MS3 reviews are structured.

Your scientist buyer, however, is trained in something different: falsification logic. This comes from Karl Popper's philosophy of science. A theory can never be proven, only disproven, and a single disconfirming case destroys the whole thing, regardless of how many confirming cases came before it. Scientists apply that framework to their own research. They apply it to your application note without thinking about it.

The asymmetry is what makes this commercially dangerous. Accumulation logic is symmetrical: ten good claims plus one weak one still feels like ten to one in your favour. Falsification logic is asymmetric: one weak claim doesn't subtract from the list, it invalidates everything before it. The scientist doesn't think nine out of ten, not bad. They think, if this one is wrong, what else did they not check? And the damage isn't proportional to how wrong the claim is. A sensitivity figure validated in an R and D buffer system rather than the buyer's actual sample matrix is a technically minor issue, but to a scientist it signals something much larger about your rigour. Rigour is the whole game.
Validation versus selection
Speaker: Matt Wilkinson
So if the problem is really one weak claim invalidating all the others, is the fix just to focus on the claims you can rigorously validate?
Speaker: Jasmine
No -- because more rigorous validation isn't the same as better selection. You can have ten fully validated claims and still lead with the wrong one -- the most impressive one rather than the most defensible one for the specific buyer reading it. Those are rarely the same thing. Selection is a commercial judgement about which claim sits deepest in your target buyer's specialty and is hardest to challenge from their bench experience. No amount of validation work makes that judgement for you.
I think there's a good transition here between what we just talked about in my blog and your blog. Your blog makes the case that the buyer disappears before, for example, the claims list is even written, edited out through review cycles before anything reaches a scientist. How does a synthetic customer actually fix that? What does it do that better customer data doesn't?
Where the buyer goes and what a synthetic customer does about it
Speaker: Matt Wilkinson
So the difference is really presence versus data. Customer data -- voice of customer studies, interview reports, buyer persona documents -- exists, but it's usually stored on a hard drive rusting away, or filed in a desk drawer and never looked at. It's not in the room when copy review happens, when the brief is being retuned, when people are deciding which claim to lead with. So messaging gets adjusted for what I like to call the gravity of the organisation -- essentially what's safest, what the organisation feels will get through the approval chain. It's not that people decide to make messaging worse. It's that nobody in the room is necessarily laser focused on the customer, and everybody has a slightly different view of what the customer might think.

A synthetic customer can be built from real interview transcripts, recordings of actual sales calls, win-loss reporting -- all sorts of data that organisations are already capturing. Once that data goes into the process, you can build a synthetic customer that you can actually query and ask for its opinion. Now, is that a substitute for voice of customer and in-market testing? No. But it keeps buyer presence active throughout the decision-making process, in between the stage gates and in between the points where you would normally go and speak to the customer. Hopefully it means organisations are better informed when they have those next conversations with customers -- reducing rework, reducing edits, and reducing the number of times a campaign misses the mark because the gravity of the organisation has pulled it away from what the buyer really needs to hear.
Speaker: Jasmine
So, an admission: I've definitely been there with the personas and PDFs that live in a drawer or on a rusting hard drive. Completely get that. But a synthetic customer is only as good as its inputs. If you built it from win data and conference booth conversations, which are all very positive, it will validate almost anything. It's the classic garbage-in, garbage-out trap.
Speaker: Matt Wilkinson
Well, in some ways, yes. The grounding has to include the specific objections and specific questions that people are actually raising. And then it comes down to testing: if your synthetic customer never says no, something is wrong with the build.

But there's something really important to note here -- it's amazing how much these will show you based on the information that's already out there. We're not just relying on what a large language model will tell us. Once we start adding in voice of customer, and on top of that the information that already exists within an organisation, it allows us to look at prioritisation in a completely different way, and to really get under the skin of the data we've already captured. So it's not about replacing voice of customer. It's about adding to it and synthesising it into a whole that is more than the sum of its parts. As humans we really struggle to bring all of that data together in one go, whereas with AI we can synthesise it into a format we can query and ask questions of -- and that will respond in a way that is like a customer.
So the buyer is not gone because your market got harder. The buyer is gone because your organisation made them invisible. Review by review, approval by approval, until what reached them had been edited down to its least interesting form. Before your next review cycle, can anyone in the room name one piece of buyer language from a real customer conversation that should be in the content you're reviewing? If nobody can, you know that your buyer has already left the room.
Closing: The commercial judgement that belongs to you
Speaker: Jasmine
And accumulation logic feels like rigour. What it actually does is give a scientist buyer more places to find the one thing that breaks. The product manager who gets MS3 right is not the one who did the most validation work. It's the one who knew which piece of that work was undeniable to the specific person reading it. That's a commercial judgement. It belongs to you.
Speaker: Matt Wilkinson
Links to both posts -- Jasmine's "One Claim You Can Prove Beats Five You Cannot" and my post "Your Buyer Left the Building" -- can be found at strivenn.com and will be in the show notes. The full conversation I had with Yuri Belast on his podcast can be found at webdream.net.
Speaker: Jasmine
We answer every message, and we'd love to hear from you. If you're working through a claims hierarchy or an MS3 review right now and something from today is useful -- or wrong -- get in touch.
Speaker: Matt Wilkinson
Absolutely. And until next time, Jasmine, it's been a pleasure.
Q&A
How do I know whether my claims list is being built on accumulation logic or falsification logic?
Ask one question at your next review: if a scientist found one claim they could challenge, would the rest of the list survive? If the answer is yes, you are operating on accumulation logic. If the answer is no, you are closer to how your buyer is actually reading it. Most commercial teams discover they have been stacking evidence rather than selecting the one claim that is hardest to break from the specific buyer's bench experience.
We have solid voice-of-customer data. What does a synthetic customer actually add that a good persona document doesn't?
Presence at the moment of decision. A persona document exists but is rarely in the room when copy gets reviewed or a brief gets retuned. A synthetic customer built from the same underlying data can be queried during review -- asked whether a claim would land, where it would break, what language would resonate. The data is the same. The difference is availability when organisational gravity is pulling decisions away from what the buyer needs.
What is the most common sign that organisational gravity has already edited the buyer out of our content?
Ask the room: can anyone name one piece of actual buyer language from a real customer conversation that should be in the content you are reviewing? If nobody can, the buyer is already gone. Messaging shaped by what will survive the approval chain rather than what a scientist would find undeniable is the most reliable indicator. The content becomes safer, flatter, and less specific -- the exact opposite of what falsification logic requires.
How should we think about which claim leads our application note or product deck for a scientist audience?
Lead with the claim that sits deepest in your target buyer's specialty and is hardest to challenge from their bench experience. That is rarely the most impressive claim. It is the most defensible one for the specific person reading it. Run it through your internal scientists not for validation but for adversarial review -- ask them where they would stop reading and why. The answer tells you whether your lead claim is a commercial judgement or a legal one.
If a synthetic customer is only as good as its inputs, what data sources actually make it useful rather than just validating what we already believe?
Ground it in objections and friction, not wins. Sales call recordings where deals slowed or stalled, loss reports, support queries, and scientist pushback from field teams give the model something to push back with. If your synthetic customer never says no or never identifies a claim it would challenge, the inputs are too positive. Conference booth conversations and upsell data will produce a model that validates almost anything. Loss data is where the signal is.