
Strivenn Thinking

Pull up a seat at our digital campfire, where story, strategy, and AI spark new possibilities for sharper brands and smarter teams.

 

Product Marketing

One Arrow, One Answer: The 3 Critical Lessons for Building a Minimal Viable Product

By Jasmine Gruia-Gray

Diana and the Discipline of the Single Shot

Diana, Roman goddess of the hunt, was never celebrated for the volume of arrows she released. She was feared for the precision of the one she chose. Where lesser hunters scattered a quiver's worth of effort across the forest floor hoping something would fall, Diana studied her quarry, read the terrain, and released a single arrow at the exact moment and angle to achieve the outcome she required. Nothing more. Nothing wasted.


Her power was not restraint for its own sake. It was the recognition that every additional arrow fired without certainty was effort diverted from the one shot that mattered. Diana's mastery lay in defining exactly what she needed to know before she released anything at all.

 

As Product Managers (PMs) developing research use only (RUO) life science tools, we face exactly this tension. The engineering team's push for additional detection channels is legitimate. R&D's insistence on automation reflects real researcher workflow requirements. Sales is right that application notes accelerate adoption. Every one of these inputs is grounded in customer reality. The discipline of minimum viable product (MVP) is not dismissing them. It is sequencing them so the core hypothesis gets tested before the quiver empties.

 

The Over-Engineering Trap in Life Sciences

MVP is not a startup concept. It is a precision instrument. Eric Ries defined it as the version of a product that collects the maximum amount of validated learning with the least effort (Lean Startup Co.). The critical word is "validated." You are not shipping something incomplete. You are shipping the minimum specification that addresses the maximum number of customer needs well enough to generate real-world evidence for your next decision.

 

Life sciences makes this harder than most sectors. Researchers are trained to optimize. Your internal R&D team is trained to optimize. The instinct to add one more assay configuration, one more validated reagent lot, one more software feature before release is baked into the scientific culture you operate inside. And it is killing your time-to-market.

 

Here is what nobody says out loud in the phase gate meeting: scoping down feels like letting your team down. When an R&D scientist has spent six months perfecting a reagent formulation, asking them to release it for fewer applications than it can support is a genuinely hard conversation. That discomfort is real and worth acknowledging. It does not change the math, but it does change how you lead the conversation.

 

A qPCR platform that ships eighteen months late because the team insisted on validating twelve primer sets before launch does not just miss the market window. It burns the runway and delivers zero validated learning in the interim.

 

Three Critical Lessons from Diana's Quiver

Lesson 1: Define the Hypothesis Before You Define the Spec

Diana did not decide how many arrows to bring based on how many deer were in the forest. She decided based on what she needed to know from the hunt. Your MVP spec must start the same way: with the hypothesis you are testing, not with the feature list you are building.

 

For a life sciences PM launching a minimal western blot imaging system, do not anchor the hypothesis to film sensitivity comparison. That benchmark has been contested for over a decade and will pull your alpha program into an argument rather than a learning exercise. Instead, anchor to a measurable customer outcome: "Core facility directors will recommend this system to incoming lab groups if it delivers reproducible quantitation across three antibody targets without protocol optimization." That hypothesis drives a tight MVP spec. Everything else (automated band analysis, multi-channel fluorescence, LIMS integration) is a second arrow you have not yet earned the right to fire.

 

Lesson 2: Your Customers Define Viable, Not Your R&D Team

The Kano Model is worth fifteen minutes of your time before your next MVP scoping session. Developed in 1984 by Professor Noriaki Kano of the Tokyo University of Science, it gives product teams a structured way to categorize requirements based on how each one actually drives customer satisfaction, rather than treating every feature as equally important. If the framework is new to you, the core idea is straightforward: not all requirements are created equal, and shipping more does not automatically mean satisfying more.

 

Kano organizes requirements into three categories. Must-haves are the baseline requirements customers never explicitly ask for because they assume them. Their absence creates immediate dissatisfaction; their presence simply avoids it. Performance attributes scale linearly: the more you deliver, the more satisfied the customer becomes, and they will compare you directly against competitors on these dimensions. Delighters are features customers did not know to ask for but respond to with genuine enthusiasm when they encounter them. The critical insight for MVP scoping is this: must-haves and one targeted performance attribute define your viable product. Delighters are deliberate second arrows.

 

Applied to a cell viability assay kit targeting pharma R&D teams running high-throughput toxicity screens, the Kano map looks like this:

 

Kano Category   Example Requirements   MVP Decision
Must-Haves   CV <5% across target cell lines. Validated protocol. Reagent stability >12 months.   Non-negotiable. MVP ships with all of these fully validated.
Performance   Compatibility with additional cell lines. Throughput per plate. Signal-to-noise ratio.   Select one dimension your beachhead segment values most. Build to that bar.
Delighters   Automated data export. Multiplexing capability. Kit customization options.   Hold for iteration 2. Do not scope into MVP. Guard against scope creep.
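The <5% CV must-have above is a concrete, checkable bar rather than a judgment call. As a minimal sketch (the replicate values and the 5% threshold are illustrative assumptions, not a prescribed QC procedure), the coefficient of variation is simply the standard deviation expressed as a percentage of the mean:

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation: sample standard deviation
    as a percentage of the mean signal."""
    mean = statistics.mean(replicates)
    return 100 * statistics.stdev(replicates) / mean

def meets_mvp_bar(replicates, threshold=5.0):
    """True if assay replicates clear the <5% CV must-have."""
    return cv_percent(replicates) < threshold

# Tight replicates clear the bar; scattered ones do not.
tight = [100, 101, 99, 100]     # CV well under 1%
scattered = [100, 150, 60]      # CV far above 5%
```

The point of encoding the bar this way is Kano discipline: a must-have is pass/fail, so pushing CV from 4% to 2% before launch buys no additional viability.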

 

The failure pattern is almost always the same: a team validates 23 cell lines when their target segment runs four. The additional 19 validations are not wrong; they are just performance attributes for a segment the team has not yet won. Fourteen months and $400K of validation effort, pointed at the wrong Kano category for the beachhead customer, produces no useful learning.

 

Researchers do not evaluate products by counting features. They evaluate by asking one question: does this solve my problem reliably enough to trust in my workflow? Viable means trustworthy for the core use case. Kano tells you exactly which requirements define that trustworthiness for your specific segment. Let it do that work before you finalize scope.
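The Kano-to-scope decision above is mechanical once the categorization is done, which is exactly why it resists scope creep. Here is an illustrative Python sketch, not a prescribed tool: the requirement names mirror the viability-assay table, and the category assignments are the assumptions your own customer research would have to supply.

```python
from enum import Enum

class Kano(Enum):
    MUST_HAVE = "must-have"
    PERFORMANCE = "performance"
    DELIGHTER = "delighter"

# Hypothetical Kano map for the cell viability assay example.
requirements = {
    "CV <5% across target cell lines": Kano.MUST_HAVE,
    "Validated protocol": Kano.MUST_HAVE,
    "Reagent stability >12 months": Kano.MUST_HAVE,
    "Compatibility with additional cell lines": Kano.PERFORMANCE,
    "Throughput per plate": Kano.PERFORMANCE,
    "Automated data export": Kano.DELIGHTER,
    "Multiplexing capability": Kano.DELIGHTER,
}

def mvp_scope(reqs, chosen_performance):
    """All must-haves plus exactly one targeted performance
    attribute; delighters are held back as second arrows."""
    scope = [r for r, k in reqs.items() if k is Kano.MUST_HAVE]
    if reqs.get(chosen_performance) is Kano.PERFORMANCE:
        scope.append(chosen_performance)
    return scope
```

Calling `mvp_scope(requirements, "Throughput per plate")` returns the three must-haves plus that one performance dimension; every delighter stays in the quiver by construction.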

 

Lesson 3: Validate Fully, Scope Tightly

Here is where life sciences PMs frequently misread MVP: minimum viable does not mean minimum validated. Your MVP must clear the same quality bar as any product you release. Reproducibility, stability, regulatory compliance for its intended use, and safety are non-negotiable regardless of scope. What changes in an MVP is the breadth of applications validated, not the rigor of the validation itself. A stripped-back NGS library prep kit still needs to perform reliably on the sample types it claims to support. The MVP decision is which sample types make the cut, not whether the validation is thorough.

 

The distinction matters most when evaluating line extensions. A true line extension, adding a new sample type to an existing validated kit, may not warrant a full MVP cycle at all. The product architecture is proven; you are validating a specific application claim. Reserve the MVP framework for genuinely novel product categories where the core value proposition is unproven.

 

Diana did not fire a poorly aimed arrow and call it a minimum viable shot. She fired one precisely aimed arrow. The minimum is in the number of arrows, not the quality of the aim.

 

The Diana Framework: Your MVP Aide-Memoire

Perfect market intelligence is not a prerequisite for using this framework. Diana herself hunted in uncertain terrain. These five checkpoints are most valuable precisely when your customer data is incomplete, because they force you to articulate what you know versus what you are assuming. Run this before your development kickoff. If you cannot answer all five, you do not have a ready scope, but you do have a clear picture of where your assumptions are exposed.

 

Checkpoint   The Question to Answer
Target   What is the single hypothesis this MVP must validate?
Quarry   Which customer segment defines 'viable' for this product?
Arrow   What is the minimum scope that satisfies their core requirement at full quality?
Release   What observable customer behavior will confirm the hypothesis?
Quiver   Which features are second arrows, held back for the next iteration?
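The five checkpoints are a pre-kickoff gate, and the useful output is the list of gaps, not a pass/fail verdict. A minimal sketch, assuming you capture answers as plain text (the `readiness_gaps` helper and its inputs are hypothetical):

```python
# The five Diana Framework checkpoints, as named in the table above.
CHECKPOINTS = {
    "Target": "What is the single hypothesis this MVP must validate?",
    "Quarry": "Which customer segment defines 'viable' for this product?",
    "Arrow": "What is the minimum scope that satisfies their core "
             "requirement at full quality?",
    "Release": "What observable customer behavior will confirm "
               "the hypothesis?",
    "Quiver": "Which features are second arrows, held back for "
              "the next iteration?",
}

def readiness_gaps(answers):
    """Return the checkpoints still unanswered before kickoff,
    i.e. where your assumptions are exposed."""
    return [c for c in CHECKPOINTS if not answers.get(c, "").strip()]
```

A team that has nailed its hypothesis and segment but nothing else would see `["Arrow", "Release", "Quiver"]` come back, which is precisely the "clear picture of where your assumptions are exposed" the framework promises.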

 

Back to the Forest

Diana's hunting was never about minimalism as a philosophy. It was about precision as a strategy. She fired one arrow because she had done the work to know that one arrow, placed correctly, would deliver the outcome she required. And when she fired it, she did not fire it at reduced force. She fired it with everything she had, at exactly the right target.

 

The most common MVP failure in RUO life science tools is not shipping too early. It is shipping the wrong scope: over-engineered for applications researchers do not yet need, and under-specified for the single workflow that would drive adoption. As with the hunter who loads a full quiver but cannot identify the quarry, the problem is never how many arrows you carry. It is whether you have done the work to know which one to release.

 

Define your hypothesis. Let customers define viable. Validate fully within a tight scope. The quiver can wait.

 

Q: My stakeholders keep pushing back on MVP, saying we need full validation before any release. Are they wrong?

A: They are not wrong about validation. They may be conflating scope with rigor. An MVP in life sciences must meet the same quality and validation standards as any released product. What changes is the breadth of the claims you are making, not the depth of evidence supporting those claims. If your MVP supports three sample types, those three sample types must be fully validated. The MVP decision is which three to prioritize, not whether to cut corners on the validation itself.

 

The pushback you are more likely to encounter is on scope, and that is a legitimate debate worth having on evidence. Bring the Kano analysis. Map which requirements are must-haves for the target segment versus which are performance attributes your early adopters have not yet asked for. When stakeholders can see that a specific feature is not a must-have for the beachhead customer, the scope conversation shifts from opinion to data.

Q: Researchers at our target accounts keep telling us they need more features before they will adopt. How do I reconcile that with MVP thinking?

A: Probe the request before you expand the spec. Researchers rarely frame their needs as feature lists. When they say they need more, they typically mean the product does not yet solve their specific problem reliably enough. That is a different diagnosis. A researcher who says "I need automation" is often really saying "the manual steps in your current workflow create too much variability for my sample throughput." Those are solvable with different MVP choices than a full automation build.

 

Run a structured discovery session with your three to five most vocal requestors. Ask them to describe the last time a research tool actually changed how they worked. The features they cite will tell you exactly which must-haves your MVP is currently missing. You will rarely find that the gap is comprehensive feature coverage. You will almost always find it is one or two specific performance attributes that are not yet hitting the bar for their workflow.