In Homer's Odyssey, Odysseus faces the Sirens, creatures whose enchanting songs promise sailors exactly what they want to hear, luring ships to crash upon hidden rocks. Odysseus, forewarned of the danger, orders his crew to plug their ears with wax while he has himself bound to the mast so he can hear the song without being able to steer the ship toward destruction.
As Product Managers (PMs) in the Research Use Only (RUO) life science tools sector, we face our own Sirens every day. They appear in the form of enthusiastic researchers at conferences, vocal key opinion leaders, and detailed feature requests in customer surveys. Their songs are compelling: "We need sub-femtomolar sensitivity!" "It must have at least 10 multiplexing channels!" "The software needs AI-powered analysis!"
Like Odysseus, we need to hear these voices; they contain valuable signals about unmet needs. But unlike Odysseus, we can't simply tie ourselves to the mast and resist. Our job is more nuanced: we must listen carefully, interpret wisely, and translate what we hear into requirements that guide our R&D teams toward building products that researchers will actually use and buy.
The shipwrecks I've witnessed in my career? Products built exactly to the specifications researchers requested, launched on time and on budget, that nobody wanted to purchase. Why? Because we confused the song for the truth.
Let me share a story that still makes me wince.
A few years ago, someone I know was leading development of a new cell isolation system. During the VOC phase, 30 immunology researchers were interviewed. The message was unanimous and emphatic: they needed automated processing of 96 samples simultaneously, with the full run completed in under 2 hours.
We dutifully documented this as a user requirement. R&D developed functional requirements around a complex robotic system with parallel processing. Eighteen months and $2.3M later, we had a prototype that met every specification.
The problem? When we brought it to beta sites, researchers balked at the $180K price tag. During follow-up interviews, we discovered what they actually needed: to leave work by 6 PM instead of 9 PM. They were processing 96 samples because that's how many they could squeeze into their evening, not because they had 96 samples that needed to be processed simultaneously.
We could have solved their real need, getting home for dinner, with a $45K benchtop system that processed 12 samples in parallel while they were in meetings, then automatically started the next batch. Total daily throughput: 96 samples. Time saved: 3 hours. Price point: within budget.
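The arithmetic behind the cheaper design is easy to sketch. The batch run time below is an assumption for illustration; the story fixes only the batch size and the daily total:

```python
# Throughput sketch for the hypothetical benchtop system.
BATCH_SIZE = 12        # samples processed in parallel per run
DAILY_TARGET = 96      # samples the lab actually needs per day
BATCH_RUN_HOURS = 1.0  # assumed unattended run time per batch (illustrative)

batches_per_day = DAILY_TARGET // BATCH_SIZE
unattended_hours = batches_per_day * BATCH_RUN_HOURS

# Eight unattended runs fit inside a working day, so daily throughput
# matches the $180K system at a quarter of the price.
print(batches_per_day, unattended_hours)
```

The point is that the user's outcome was daily throughput, not simultaneity; once you see that, the expensive spec dissolves.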
The lesson: What researchers say they want is often a solution they've already imagined. The Product Manager's job is to uncover the underlying need, the problem they're trying to solve, before we ever write a requirement.
Here's the framework I now use to translate VOC through to functional requirements:
Layer 1: VOC (Voice of Customer) → What researchers "say" in interviews, surveys, and conversations
Layer 2: User Needs → What they "actually need" (interpreted and validated through the noise)
Layer 3: User Requirements (URs) → What the product must enable users to accomplish, with must-haves separated from nice-to-haves (written by Product Management)
Layer 4: Functional Requirements (FRs) → What the product/system "must technically deliver" to meet URs (written by R&D based on URs)
The magic and the risk live in the transitions between these layers.
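One way to keep those transitions honest is to make the layers an explicit data structure. This is a sketch, not an industry standard; the class and field names are mine, populated with the western blot example that appears later in this piece:

```python
from dataclasses import dataclass, field

@dataclass
class FR:
    fr_id: str
    spec: str       # Layer 4: what the system must technically deliver
    rationale: str  # why this spec serves the parent UR

@dataclass
class UR:
    ur_id: str
    outcome: str            # Layer 3: what the user must be able to accomplish
    success_criteria: str
    frs: list[FR] = field(default_factory=list)

@dataclass
class UserNeed:
    statement: str          # Layer 2: the interpreted, validated need
    voc_quotes: list[str]   # Layer 1: raw things researchers actually said
    urs: list[UR] = field(default_factory=list)

need = UserNeed(
    statement="Reduce time, cost, and variability in protein detection workflows",
    voc_quotes=["These western blots take forever, and I can never get clean bands."],
    urs=[UR(
        ur_id="UR-101",
        outcome="Complete the workflow, sample to publication-ready image, "
                "in <=3 hours hands-on time",
        success_criteria="Time study with 5 users: mean completion 3 h +/- 30 min",
        frs=[FR("FR-201", "Primary antibody KD <= 1 nM by SPR",
                "Higher affinity reduces sensitivity to pipetting variation")],
    )],
)
```

In this structure, every FR is reachable only through a UR, and every UR only through a need; an FR with no path back to a VOC quote has no business in the spec.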
Technique: Ask "why" repeatedly. Most PMs stop at two or three "whys." Push deeper.
User Need Interpretation: Researcher needs to identify intervention opportunities in the pre-symptomatic phase of disease.
Now you understand that sensitivity is important, but so is speed (early detection), specificity (confidence in low signals), and potentially multiplexing (multiple markers for pre-symptomatic signature). This opens up solution possibilities beyond just raw sensitivity.
When researchers "hire" your product, what job are they really hiring it to do?
User Need Interpretation: Researcher needs to extract maximum information from minimal sample volume.
This reframes the problem entirely. Maybe you need better sensitivity per channel so they can subdivide one sample into four tubes instead of eight, not necessarily 20 colors.
The gap between what researchers say they do and what they actually do is vast. I once watched a researcher who told me she "needed faster PCR" spend 45 minutes manually entering sample IDs into software before starting a 90-minute PCR run. The PCR wasn't her bottleneck; data entry was.
Technique: Spend time in the lab. Watch the entire workflow, not just the step where your product fits.
"What would have to be true for you to switch from your current solution?"
This question reveals what they value versus what they complain about. Complaints are easy. Changing behaviour is hard.
The loudest researcher is often not the most representative.
Segment your VOC by researcher type (academic vs. pharma), application, and revenue potential.
A feature that's critical for 15% of your market but irrelevant to the other 85% might not belong in version 1.0.
This is where Product Management earns its keep. User requirements are your deliverable to R&D. They must be specific, measurable, focused on outcomes rather than solutions, and traceable back to validated user needs.
Consider a format like this:
"User must be able to [action verb] [object] [performance criteria] [context/constraint]"
VOC Quote from Interview:
"These western blots take forever, and I can never get clean bands. I need better antibodies that just work."
Interpreted User Need:
Reduce time, cost, and variability in protein detection workflows.
User Requirements (Written by PM):
UR-101: User must be able to complete the entire protein detection workflow from sample loading to publication-ready image in ≤3 hours hands-on time.
Success criteria: Time study with 5 users shows mean completion time of 3 hours ± 30 minutes
UR-102: User must be able to obtain reproducible quantitative results with inter-replicate CV ≤15% without optimisation.
Success criteria: Three naive users follow protocol as written and achieve CV ≤15% on first attempt
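UR-102's acceptance test is mechanical enough to automate. A minimal sketch of the CV check; the replicate values are invented for illustration:

```python
import statistics

def inter_replicate_cv(values):
    """Percent coefficient of variation across replicate measurements."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical band intensities from one naive user's three replicates
replicates = [1.00, 1.12, 0.95]
cv = inter_replicate_cv(replicates)
print(f"CV = {cv:.1f}% -> {'PASS' if cv <= 15 else 'FAIL'} against UR-102")
```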
Corresponding Functional Requirements (Written by R&D):
FR-201: Primary antibody affinity constant (KD) shall be ≤1 nM for target protein as measured by SPR.
Rationale: Higher affinity reduces sensitivity to pipetting variation and sample loading differences
FR-202: Blocking buffer shall reduce non-specific antibody binding to ≤10% of specific signal as measured by blots with blocking buffer only (no primary antibody).
Rationale: Ensures background is minimal and consistent
FR-203: All liquid reagents shall be formulated with stabilisers providing ≥12 months shelf life at 4°C.
Rationale: Lot-to-lot and age-related reagent degradation is a major source of variability
The key principle: One UR typically drives 2-3 FRs, each addressing a different technical contributor to the user outcome. This is where R&D's creativity shines: they determine how to achieve the user's outcome.
Not every UR makes it into Milestone 1 (development phase gate). This is where many PMs struggle.
The mistake: Defining the timeline first, then descoping URs to fit it. You end up launching a half-baked MVP that doesn't actually solve the core user problem. Researchers won't buy it, no matter how "on time" you shipped.
The right approach: Define Tier 1 URs rigorously based on VOC, have R&D estimate the realistic timeline, then that becomes your commitment. If leadership pushes back, you make the trade-off explicit and quantified.
Tier 1 (Must Have): Solves the core user problem identified in VOC. Without it, researchers won't buy. R&D estimates realistic timeline.
Tier 2 (Competitive Advantage): Differentiates you but isn't dealbreaker if absent. Plan for v1.1-v1.2.
Tier 3 (Nice-to-Have): Addresses niche requests from roughly 5% of customers. Future releases.
You scope Tier 1 conservatively, only URs that VOC research shows are non-negotiable. R&D estimates 18 months. Leadership says "We need this in 12 months."
Your options aren't "cut features to fit 12 months." Your options are: ship the full Tier 1 scope on the 18-month estimate; add resources or parallelise work to compress the timeline; or re-examine the VOC and move a specific UR to Tier 2, with the lost market coverage quantified.
Leadership now chooses based on actual trade-offs, not wishful thinking.
Example: We had four candidate URs for an assay system. R&D estimated 20 months for all Tier 1. Leadership wanted 14 months. We didn't compromise URs, we did a deeper VOC analysis and discovered UR-104 (LIMS integration) was critical only for pharma (~30% of market), not academic researchers (~50% of market). We moved it to Tier 2, brought Tier 1 to 15 months, and shipped Tier 1 + Tier 2 together in 18 months after learning we could build in parallel. The MVP solved the core need for 80% of customers from day one.
Many product programmes verify functional requirements but never validate that FRs actually deliver on user requirements.
Gate 1: FR Verification (R&D-led)
Test each functional requirement against its specification.
Gate 2: UR Validation (PM-led with user testing)
Test whether meeting the FRs actually delivers the user outcomes.
Gate 3: User Need Validation (PM-led with market feedback)
Test whether solving the URs addresses the original user need.
Taking VOC and writing it directly as a functional requirement.
Example: Researcher says: "I need a multichannel pipette with 0.1 µL precision across all 8 channels."
Misguided PM writes: "FR-001: Device shall dispense 0.1-10 µL with ±0.1 µL accuracy across 8 channels simultaneously."
The problem: You never asked why they need that precision. Maybe they're trying to reduce well-to-well variability, stretch a scarce sample across more assays, or hit a reproducibility target in a downstream readout.
The fix: Always insert the UR layer.
Writing URs that prescribe the solution instead of the outcome.
Example: "User must be able to operate a robotic system that processes 96 samples simultaneously" prescribes a solution; "User must be able to process 96 samples within a working day with minimal hands-on time" specifies the outcome.
Assuming VOC from one segment represents all users. Academic researchers wanted "publication-quality" visualisation. Pharma (40% of market) rejected it because SOPs required locked-down formats for regulatory compliance.
The fix: Segment your VOC. Write URs that are either universal (validated across every target segment) or explicitly scoped to a named segment.
Writing requirements without maintaining traceability to user value.
Every FR should complete this sentence: "We need [this technical specification] because it enables [this user outcome] which solves [this user need]."
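That sentence can double as an automated audit. A sketch, assuming you keep simple FR→UR and UR→need mappings; the dictionaries below reuse IDs from the western blot example, plus a deliberately orphaned FR-999:

```python
def trace_fr(fr_id, fr_to_ur, ur_to_need):
    """Complete the traceability sentence for one FR, or flag the break."""
    ur = fr_to_ur.get(fr_id)
    if ur is None:
        return f"{fr_id}: ORPHAN - no user requirement justifies this spec"
    need = ur_to_need.get(ur)
    if need is None:
        return f"{fr_id} -> {ur}: no validated user need behind this UR"
    return f"We need {fr_id} because it enables {ur}, which solves '{need}'."

fr_to_ur = {"FR-201": "UR-101", "FR-202": "UR-102"}
ur_to_need = {
    "UR-101": "reduced hands-on time in protein detection",
    "UR-102": "reproducible results without optimisation",
}

print(trace_fr("FR-201", fr_to_ur, ur_to_need))
print(trace_fr("FR-999", fr_to_ur, ur_to_need))  # a spec nobody can justify
```

Running this over a full requirements set surfaces orphan FRs before they consume eighteen months of development.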
Building for the loudest user, not the largest need. A KOL demanded a specific spectral unmixing algorithm. We spent 6 months implementing it. Post-launch survey: only 4% of users had ever used it.
Fix: Weight VOC inputs by frequency, intensity, revenue potential, and strategic fit.
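A weighted score makes that correction explicit. The weights and 1-5 ratings below are assumptions chosen to illustrate the mechanics, not calibrated values:

```python
# Illustrative weights; tune per programme.
WEIGHTS = {"frequency": 0.3, "intensity": 0.2, "revenue": 0.3, "strategic_fit": 0.2}

def voc_score(ratings):
    """Weighted 1-5 priority score for one VOC input."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# One loud KOL's algorithm request vs. a quiet, widespread workflow pain
kol_algorithm = {"frequency": 1, "intensity": 5, "revenue": 2, "strategic_fit": 2}
data_entry_pain = {"frequency": 5, "intensity": 3, "revenue": 4, "strategic_fit": 4}

print(round(voc_score(kol_algorithm), 2))    # 2.3
print(round(voc_score(data_entry_pain), 2))  # 4.1
```

The KOL's pet algorithm scores well below the unglamorous data-entry fix, which is exactly the correction the post-launch survey delivered too late.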
The best requirement translations happen when PM and R&D work as partners, not in a serial handoff.
Phase 1: Joint VOC Review (PM leads, R&D participates)
Phase 2: User Need Interpretation (PM leads, R&D challenges)
Phase 3: UR Drafting (PM writes, R&D reviews)
Phase 4: FR Development (R&D writes, PM reviews)
Phase 5: Joint Validation Planning
Odysseus succeeded not by ignoring the Sirens' song, but by hearing it without letting it control his actions. He listened and learned, but stayed on course.
As Product Managers, we must do the same:
The rocks that sink product development programmes? They're not technical failures. They're translation failures. Building the wrong thing, beautifully.
Your job as PM isn't to be a transcriptionist, dutifully recording what researchers say they want. Your job is to be an interpreter, someone who understands both the language of users and the language of engineering, and can translate between them without losing meaning.
Do this well, and you won't just avoid the rocks.
You'll build products that researchers actually want to buy.