Your buyer left the room

When to use synthetic customers - and when not to

But first, what is a Synthetic Customer anyway?

A synthetic customer is an AI model trained on your specific buyer data - interviews, VOC research, sales call transcripts, and competitive intelligence - that responds to questions, challenges messaging, and simulates buying decisions as your actual buyers would.

 

It is not a persona document. Not a targeting filter. Not an off-the-shelf AI simulation. It is a queryable, living representation of your specific buyer - built from your research, calibrated to your category, and available throughout the process.
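To make "queryable, living representation" concrete: one common pattern is to assemble the buyer research into a grounding prompt that an LLM then answers from in character. The sketch below is illustrative only - the function name and all data are hypothetical, not Strivenn's or PersonaAI's actual implementation - and it stops short of the model call itself.

```python
# Sketch: assembling a grounding prompt for a synthetic customer.
# All data below is hypothetical; a real build would use your own
# interview transcripts, VOC survey data, and win/loss notes.

def build_persona_prompt(role, transcripts, voc_quotes, objections):
    """Combine buyer research into a single system prompt."""
    sections = [
        f"You are a synthetic customer: a {role}.",
        "Answer as this buyer would, using their vocabulary.",
        "Known objections you raise when messaging is weak:",
    ]
    sections += [f"- {o}" for o in objections]
    sections.append("Verbatim language from real buyer interviews:")
    sections += [f'  "{t}"' for t in transcripts]
    sections.append("Voice-of-customer survey themes:")
    sections += [f"- {q}" for q in voc_quotes]
    return "\n".join(sections)

prompt = build_persona_prompt(
    role="lab director at a mid-size biotech",
    transcripts=["Validation time matters more than list price."],
    voc_quotes=["Procurement adds 8-12 weeks to any new vendor."],
    objections=["How does this fit our existing LIMS workflow?"],
)
print(prompt)
```

The prompt would then be passed to a chat model as the system message; the quality of the answers depends entirely on the quality of the research fed in, which is the point of the grounding columns in the table below.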

Not this: Persona
What it is: A static description of your buyer type.
What it tells you: Who your buyer is - role, pain points, channels - captured at a point in time.
When it's available: At the start of the process. Consulted at the brief, filed after it.
The gap: Does not survive the approval process. By the time the campaign ships, the buyer it describes has left the room.

Not this: ICP
What it is: A firmographic filter for account targeting.
What it tells you: Which companies to pursue - size, sector, stage, spend.
When it's available: At the targeting stage. Stops at the account level.
The gap: Says nothing about how buyers inside those accounts actually think, object, or decide.

Not this: Generic AI Persona
What it is: An AI simulation built from generic training data.
What it tells you: What a generic B2B buyer might think - based on the internet, not your market.
When it's available: On demand. Fast. Indistinguishable from your competitor's version.
The gap: Built from the same data as everyone else's. Cannot surface the category-specific objections that live inside your buyer's actual experience.

This: Synthetic Customer
What it is: An AI model trained on your specific buyer research.
What it tells you: What your actual buyers will say when they read your draft - with competitor proposals already in their inbox.
When it's available: Before the brief. Inside the brief. After legal. Before the pitch. Whenever the buyer would have left the room.
The gap: None. This is the goal.


The difference is not academic. A persona tells you who your buyer is. An ICP tells you which companies to target. A generic AI persona tells you what any B2B buyer might think. A synthetic customer tells you what your buyer will say when they read your specific draft - with three competitor proposals already in their inbox.

When to use synthetic customers - and when not to

We mapped 13 commercial tasks by synthetic confidence, grounding requirement, and risk. Based on research from Stanford, Harvard Business School, LMU Munich, and Vanderbilt.

 

Synthetic for directional. Human for decisional. Grounding raises the ceiling.

Each task below is scored across three columns: Ungrounded (generic LLM), PersonaAI-grounded, and Human research required.
🔬 Research and Insight
Hypothesis generation

Ungrounded: ○ Directional only
LLM-generated assumptions about buyer problems.
Best use: Generate a long list of hypotheses to pressure-test.
Limitation: Outputs reflect training data averages. May miss niche life science pain points.

PersonaAI-grounded: ~ Moderate confidence
Hypotheses anchored in real buyer language and observed patterns.
Grounding adds: Personas built from VOC data and transcripts surface the specific framing real buyers use.
Inputs: VOC survey data + interview transcripts

Human research required: Validate all outputs before acting
Survey pre-testing

Ungrounded: ~ Moderate confidence
Checks question logic, flow, and answer completeness.
Best use: Stress-test survey design before fielding.
Limitation: May not flag sector-specific terminology gaps or life science workflow context.

PersonaAI-grounded: ✓ High confidence
Sector-accurate review using real buyer vocabulary and context.
Grounding adds: Grounded personas flag questions that use vendor language instead of buyer language.
Inputs: LinkedIn profiles + role context

Human research required: Optional - a grounded pre-test reduces but does not eliminate the need
New product development research testing

Ungrounded: ✕ Low confidence
Generic concept reactions from the LLM's average buyer.
Best use: First-pass concept screening only.
Limitation: Ungrounded LLMs approve more concepts than real buyers. Workflow friction and procurement objections are consistently underweighted.

PersonaAI-grounded: ~ Moderate confidence
Concept reactions grounded in real workflow, procurement, and validation context.
Grounding adds: PersonaAI-grounded personas surface the objections that kill new product adoption. Testing now with a global life science tools company.
Inputs: Interview transcripts + LinkedIn profiles + VOC data

Human research required: Required at every stage gate before commercial commitment
Competitive intelligence

Ungrounded: ✕ Low confidence
Comparison of alternatives based on LLM training data.
Best use: Baseline landscape mapping only.
Limitation: Cannot reflect recent product releases, regulatory approvals, or pricing shifts.

PersonaAI-grounded: ~ Moderate confidence
Buyer comparison logic grounded in real evaluation criteria.
Grounding adds: Grounded personas model how your specific buyer type compares alternatives. Cannot replace live competitive tracking.
Inputs: Win/loss data + VOC data

Human research required: Required - live competitive tracking cannot be synthesised
✍ Content and Messaging
Content brief development

Ungrounded: ~ Moderate confidence
Topic angles and structure based on assumed buyer interests.
Limitation: Briefs reflect what the LLM thinks buyers care about - not what they actually say in interviews or search for.

PersonaAI-grounded: ✓ High confidence
Briefs built from real buyer vocabulary, pain framing, and information needs.
Grounding adds: VOC and transcript grounding ensures content addresses the specific questions real buyers ask.
Inputs: Interview transcripts + VOC survey data

Human research required: Validate angle and tone against recent market context
Content creation

Ungrounded: ~ Moderate confidence
On-brand drafts in an assumed buyer voice and register.
Limitation: LLM drafts default to a generic B2B register. Life science-specific vocabulary is used inconsistently.

PersonaAI-grounded: ✓ High confidence
Drafts using real buyer vocabulary, pain framing, and objection language.
Grounding adds: Atlas brand AI trained on Strivenn positioning, personas, and voice frameworks.
Inputs: Interview transcripts + VOC data + Atlas brand training

Human research required: Review before publishing - grounding raises quality, human judgement closes the gap
Message testing

Ungrounded: ✕ Low confidence
Generic message reactions from the LLM's average buyer.
Limitation: Ungrounded personas approve too many messages. Cannot reliably distinguish resonance from plausibility.

PersonaAI-grounded: ✓ High confidence
Message reactions grounded in real buyer priorities, objections, and decision criteria.
Grounding adds: PersonaAI grounding substantially reduces sycophancy. Grounded personas challenge weak messages using real buyer logic.
Inputs: Interview transcripts + VOC survey data + LinkedIn profiles

Human research required: Validate shortlisted messages with a small real buyer cohort before full deployment
🤝 Sales and Commercial
Sales objection handling

Ungrounded: ~ Moderate confidence
Generic objection library based on assumed buyer concerns.
Limitation: Without win/loss grounding, synthetic objections miss deal-specific blockers: procurement timelines, validation requirements, incumbent inertia.

PersonaAI-grounded: ✓ High confidence
Objection library built from real win/loss patterns and buyer interview data.
Grounding adds: Surfaces the objections your sales team actually encounters - not generic B2B objections.
Inputs: Win/loss interview themes + CRM objection data

Human research required: Sales team review to validate objection accuracy before deployment
Personality-informed pitch practice

Ungrounded: ✕ Low confidence
Generic stakeholder role play based on job title and sector.
Limitation: Without real individual data, the simulation reflects the LLM's average senior buyer - not the specific person you are meeting.

PersonaAI-grounded: ✓ High confidence
Stakeholder simulation built from a real individual's digital footprint.
Grounding adds: Tools like Humantic.ai ground the persona in the actual buyer's LinkedIn profile. The synthetic element is only the conversation - the person is real.
Inputs: LinkedIn profile + Humantic.ai behavioural signals

Human research required: The real meeting replaces the simulation - this is preparation, not a substitute
🎯 Strategy and Planning
ICP definition and segmentation

Ungrounded: ✕ Low confidence
Segment profiles based on LLM assumptions about buyer types.
Limitation: Synthetic segments miss niche buyers and outliers - precisely the segments most valuable in specialised life science markets.

PersonaAI-grounded: ~ Moderate confidence
ICP profiles supplemented with synthetic elaboration of real customer patterns.
Grounding adds: CRM data and win/loss analysis anchor ICP definition in real buying patterns. Synthetic extends - it does not define.
Inputs: CRM win/loss data + customer interview themes

Human research required: Required - an ICP cannot be built on synthetic data alone
ABM account planning

Ungrounded: ✕ Low confidence
Generic account hypotheses based on company size and sector.
Limitation: ABM executes against specific named individuals. Ungrounded synthetic personas cannot replicate relationship history, recent activity, or organisational context.

PersonaAI-grounded: ~ Moderate confidence
Account planning hypotheses supplemented with stakeholder personality and communication insights.
Grounding adds: CRM history + Humantic.ai stakeholder profiling ground account plans in real relationship context.
Inputs: CRM account data + LinkedIn Sales Navigator + Humantic.ai

Human research required: Required - real account intelligence is irreplaceable
⚠ High-Stakes Decisions
Pricing research

Ungrounded: ✕ Avoid
Willingness-to-pay estimates with no budget, procurement, or spend-justification context.
Why: Synthetic buyers have no budgets, no procurement process, and no experience justifying spend. The output is structurally optimistic. Mispricing is commercially damaging and slow to reverse.

PersonaAI-grounded: ✕ Avoid - grounding does not help here
Grounding cannot resolve the structural absence of real budget constraints and procurement dynamics.
Use instead: Real willingness-to-pay interviews, conjoint studies, or transactional data

Human research required: Essential - no synthetic substitute exists for pricing research
Market sizing and forecasting

Ungrounded: ✕ Avoid
Market estimates based on LLM training data with an unknown cutoff.
Why: LLMs cannot know about NIH funding cuts, recent M&A activity, or diagnostic reimbursement changes that materially affect life science market size.

PersonaAI-grounded: ✕ Avoid - grounding does not help here
Grounding improves qualitative framing but cannot supply quantitative market data.
Use instead: Primary market research + industry databases + real buyer interviews

Human research required: Essential - quantitative market data cannot be synthesised
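The rule the table encodes - synthetic for directional work, human research for decisional and high-stakes work, grounding raising the ceiling in between - can be sketched as a small lookup. The task names and confidence labels below are an illustrative subset of the table, not any product's actual API.

```python
# Hypothetical sketch of the table's decision rule: synthetic output is
# usable for low-risk, directional tasks; grounding raises the ceiling;
# high-stakes decisions always require human research.

CONFIDENCE = {  # (ungrounded, grounded) per task, illustrative subset
    "hypothesis generation": ("directional", "moderate"),
    "message testing": ("low", "high"),
    "pricing research": ("avoid", "avoid"),
}

def recommend(task, grounded=True):
    """Map a task and grounding level to the table's recommendation."""
    ungrounded_level, grounded_level = CONFIDENCE[task]
    level = grounded_level if grounded else ungrounded_level
    if level == "avoid":
        return "human research only"
    if level in ("high", "moderate"):
        return "synthetic for directional use; validate before acting"
    return "first-pass only; human research required"

print(recommend("pricing research"))  # human research only
print(recommend("message testing"))   # synthetic for directional use; validate before acting
```

Note that for pricing research the recommendation does not change with grounding - which is the table's point about structural gaps no amount of grounding can close.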

Grounded in research from Stanford HAI, Harvard Business School, LMU Munich, and Vanderbilt University. PersonaAI grounding uses real customer interview transcripts, LinkedIn profiles, and VOC survey data. NPD client application anonymised. 

Apply this with your team →
Talk to us about where PersonaAI fits in your workflow