
Strivenn Thinking
5 Struggles Building Business Cases for Innovative RUO Products & How to Overcome Them
By Jasmine Gruia-Gray
In Greek mythology, Cassandra was cursed with the ability to see the future perfectly, but no one would ever believe her. She'd warn of disaster, and people would dismiss her as alarmist or delusional. The curse wasn't that she was wrong; it was that nobody believed her even when she was right.
Every Product Manager (PM) in life sciences knows this feeling. You've got a genuinely compelling innovation, something that solves a real lab pain point or opens an entirely new workflow. Your technical team is excited. Early customers give it a thumbs up. You can "see" where this is going. But when you sit down to build the business case, reality hits differently than you imagined. Suddenly you're Cassandra, trying to convince Finance and Sales that you're not just enthusiastic, you're credible. And they're skeptical in ways that have nothing to do with your vision and everything to do with the numbers you can't easily defend.
Here are the five struggles that have tripped me up, and more importantly, what I've learned works.
1. Market Sizing in the Absence of Comparables (Planning Fallacy)
The Struggle: When you're creating something truly novel, say, a platform that didn't exist two years ago, there's no historical data to anchor on. How do you size a market that, technically, didn't exist before your product?
I've spent weeks building bottom-up models: counting labs, estimating adoption curves, extrapolating from adjacent markets. The problem? Every assumption feels like fiction, and Finance teams push back.
Without historical comparables, it's easy to fall into the planning fallacy: you can make the number whatever you want. I've seen PMs build cases claiming 40% CAGR because "this is totally new." That's not credibility; that's a confession that you don't know. The cognitive bias is real: we're naturally optimistic about our own projects and terrible at anchoring to what actually happened in similar situations.
The Guidance: Start with the pain, not the market size. Quantify the exact problem you're solving, not the addressable market, but the magnitude of the inefficiency or cost it represents.
For example:
[Your RUO tool eliminates 40 hours of manual work per lab per year] x [15,000 target labs globally]
= 600,000 hours recovered annually
At a conservative value of $40/hour for that work, the total problem being solved is roughly $24M per year.
Then be transparent about an adoption assumption: "Assuming 10% penetration over 5 years based on similar platform adoption curves in genomics." This assumption acknowledges uncertainty while grounding the case in precedent.
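The arithmetic above can be sketched as a tiny model. The figures are the hypothetical ones from the example, not real market data, and the penetration rate is the stated assumption:

```python
# Hypothetical inputs from the worked example above; replace with your own data.
hours_saved_per_lab = 40      # manual hours eliminated per lab, per year
target_labs = 15_000          # labs worldwide that fit the target profile
value_per_hour = 40.0         # conservative fully loaded cost of the work, $/hour
penetration = 0.10            # assumed 5-year penetration, anchored to comparable launches

total_hours = hours_saved_per_lab * target_labs       # size of the pain, in hours
problem_value = total_hours * value_per_hour          # size of the pain, in dollars
captured_value = problem_value * penetration          # the slice your product can claim

print(f"Hours recovered annually: {total_hours:,}")
print(f"Total problem value: ${problem_value / 1e6:.0f}M")
print(f"At {penetration:.0%} penetration: ${captured_value / 1e6:.1f}M")
```

Keeping the penetration assumption as a separate, visible line item is the point: Finance can challenge that one number without the whole model collapsing.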
2. Justifying Development Costs When ROI Timelines Are Uncertain
The Struggle: RUO products often take 18-36 months to develop given all the verification and validation required. But you're asking the organization to commit $5-10M before you truly know if the market will adopt at the pace you're projecting.
The tension is real: you need investment to validate the market, but you need market validation to justify the investment. It's a catch-22 that Finance teams are trained to spot.
I've made the mistake of front-loading all development costs into year one and then wondering why CFOs give me the look. "So you're spending $8M upfront with revenue trickling in starting year two?" Yeah, that's a hard sell.
The Guidance: Phase the business case around "learning gates", not just funding stages. Structure your plan into smaller chunks, each with a lower-risk investment and easy-to-understand go/no-go decision criteria. For example:
Phase 1 ($2M, 6 months):
- Prototype validation with 5 design partners
- Go/No-Go decision target: 80% of labs report this solves their stated problem.
This shifts the conversation from "Is this worth $8M?" to "Is $2M reasonable to de-risk $8M?" Phase 2 can then focus on cost-effective manufacturing, with its own go/no-go target.
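A back-of-the-envelope calculation makes the de-risking effect concrete. The probability and dollar figures below are invented for illustration, assuming the $8M program and $2M Phase 1 from the example:

```python
# Invented illustrative numbers: $8M total program with a $2M Phase 1 gate.
full_program = 8.0               # $M if funded all at once
phase1 = 2.0                     # $M at risk before the go/no-go gate
phase2 = full_program - phase1   # $M released only on a "go"
p_go = 0.5                       # assumed chance Phase 1 clears its validation gate

# Capital exposed to a failed validation under each structure:
upfront_downside = full_program  # whole budget at risk
staged_downside = phase1         # only Phase 1 at risk

# Expected total spend under the staged structure:
expected_staged_spend = phase1 + p_go * phase2

print(f"Downside if validation fails: ${upfront_downside}M upfront vs ${staged_downside}M staged")
print(f"Expected staged spend: ${expected_staged_spend}M")
```

Even with a coin-flip validation risk, staging caps the downside at $2M instead of $8M, which is exactly the reframing the CFO is looking for.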
Also, separate capex from opex in your narrative. If you're building manufacturing capabilities, that's a different ROI conversation than pure software development. Investors need to see different cost structures clearly.
And here's the real conversation that drives home investment: show the "cost of inaction". What are competitors doing? What's the risk if we wait two years? Sometimes the best business case isn't about the product's potential, it's about market timing.
3. Proving Clinical/Biological Efficacy When Early Data Is Preliminary
The Struggle: Your tool works in the lab. Your early users think it's great. However, you're not running a controlled trial, and your customer feedback lacks statistical rigour. Finance wants confidence; you've got anecdotes.
The gap between "promising early data" and "defensible market assumption" can kill a business case. You'll have researchers saying things like, "This saved us a week on our workflow," but translating that into reliable ROI inputs is murky. One lab's game-changer is another lab's nice-to-have.
The Guidance: Invest in structured validation early, before you build the full business case. Work with 3-5 design partners (not cheerleaders, actual critical users) and run time-motion studies. Get specific, quantified feedback: How long does the current workflow take? How much does it cost? What happens after your tool is deployed?
Use that data to build confidence intervals, not point estimates. "Based on validation with 5 beta users, we project 20-40% efficiency gain, with a most-likely scenario of 28%." This is credible. It shows you've done the work without overselling.
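As a sketch of how such an interval might be produced from design-partner data: the gain figures below are made up, and the t critical value is hard-coded for a sample of five, so this is an illustration rather than a rigorous analysis plan:

```python
import statistics

# Hypothetical efficiency gains (%) measured with 5 design partners.
gains = [22, 31, 26, 35, 28]

n = len(gains)
mean = statistics.mean(gains)
sd = statistics.stdev(gains)   # sample standard deviation
t_crit = 2.776                 # two-sided 95% t critical value for df = n - 1 = 4
half_width = t_crit * sd / n ** 0.5

print(f"Most-likely gain: {mean:.1f}%")
print(f"95% interval: {mean - half_width:.1f}% to {mean + half_width:.1f}%")
```

With only five observations the interval is wide, and that's fine: presenting the width honestly is what separates "we measured this" from "we hope for this".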
Also, separate the adoption risk from the efficacy risk in your case. Your tool might be great, but labs might be slow to adopt new workflows because, for example, concordance data are required. These are different problems with different solutions, and conflating them is where many cases fall apart.
4. Competing Against "Good Enough" Incumbents
The Struggle: Even when your product is genuinely better, incumbent tools have a massive advantage: switching costs, training inertia, and existing relationships with labs. Your innovation might reduce analysis time by 30%, but if customers have already trained their staff and integrated the old tool into their pipeline, you're asking them to absorb real disruption costs.
For disruptive products (significant improvements to existing tools), the business case problem is that your ROI math assumes adoption, but adoption requires customers to accept short-term pain for long-term gain. Labs are risk-averse. They want proven tools, not innovation.
The Guidance: Build your business case around "total cost of ownership", not just the feature delta. Yes, your tool is faster. But what's the cost of migration, retraining, and validation? What's the risk if something goes wrong mid-migration?
When the customer sees a realistic picture: "Your current solution costs $500K/year; ours costs $300K/year, but switching costs $80K and takes 3 months", they can make an informed decision. Often, they'll still choose you, but only if you're upfront about the friction.
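Those figures translate into a simple payback and multi-year TCO comparison. All dollar amounts are the illustrative ones from the example above:

```python
# Illustrative figures from the example above.
incumbent_annual = 500_000   # current solution, $/year
ours_annual = 300_000        # our solution, $/year
switching_cost = 80_000      # one-time migration, retraining, and validation

annual_savings = incumbent_annual - ours_annual
payback_months = switching_cost / annual_savings * 12

tco_incumbent_3yr = 3 * incumbent_annual
tco_ours_3yr = switching_cost + 3 * ours_annual

print(f"Payback on switching cost: {payback_months:.1f} months")
print(f"3-year TCO: ${tco_incumbent_3yr:,} incumbent vs ${tco_ours_3yr:,} ours")
```

A sub-year payback on the switching cost, shown openly alongside the friction, is a far stronger argument than a feature comparison chart.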
Also, identify early adopter segments where switching costs are lower. New labs, labs expanding into new areas, or those already planning a platform refresh are warmer prospects. Build your case around that segment first, then expand.
5. Building Credibility When Revenue Assumptions Are Deliberately Conservative
The Struggle: Sales won't commit to aggressive revenue targets; they'll give you a "sandbag" number they're confident they can beat. It's smart risk management on their part; they don't want to inherit targets they think are unrealistic. But from a business case perspective, this creates a problem: your ROI looks worse than you actually believe it will be.
You know the real upside is higher. Sales knows it too. But the business case is locked in with conservative numbers, and Finance is evaluating the investment based on ROI that understates the opportunity. You're left in an awkward position: if you push back on Sales' assumptions, you look like you don't trust them. If you accept them, your case looks marginal and harder to justify.
The Guidance: Separate the investment decision from the revenue forecast. Build your case with two explicit layers:
- Why this product is worth funding from a strategic perspective: market access, competitive positioning, platform expansion, or cost avoidance. This is your baseline justification and doesn't depend on Sales' forecast.
- Sales' committed revenue assumptions, clearly labelled as conservative. Frame it as: "Sales has committed to X revenue; we believe upside exists beyond this, but this is their confident floor."
This approach does two things: It acknowledges that Sales' number is real and defensible (you're not undermining them), while also signalling to Finance that there's optionality in the case beyond what's been guaranteed. It's transparent.
Then, set up explicit tracking from day one. Track actual revenue against the forecast every quarter, and when Sales inevitably outperforms, document it meticulously: Year 1 actual vs. forecast, Year 2 actual vs. forecast, Year 3 actual vs. forecast. That record becomes the credibility engine for your next business case.
When you go back to Finance asking for Phase 2 funding or expansion capital, you're no longer arguing from optimism. You're arguing from precedent: "We forecasted conservatively in the original case and beat it by 35%. Here's what actually happened, here's why we outperformed, and here's what that tells us about the next phase." Suddenly, your assumptions aren't hopeful; they're evidence-based.
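One minimal way to keep that record. The forecast and actual figures here are invented to show the shape of the tracking, not taken from the text:

```python
# Invented illustrative numbers ($M): Sales' committed floor vs what actually landed.
forecast = {"Y1": 1.0, "Y2": 2.5, "Y3": 4.0}
actual   = {"Y1": 1.3, "Y2": 3.4, "Y3": 5.6}

for year in forecast:
    beat = (actual[year] - forecast[year]) / forecast[year]
    print(f"{year}: forecast ${forecast[year]}M, actual ${actual[year]}M ({beat:+.0%})")

cumulative_beat = (sum(actual.values()) - sum(forecast.values())) / sum(forecast.values())
print(f"Cumulative outperformance: {cumulative_beat:+.0%}")
```

A spreadsheet does the same job; what matters is that the comparison is explicit, dated, and ready to pull out when the Phase 2 conversation starts.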
This also aligns incentives beautifully. Sales commits to a number they can beat and becomes invested in overperformance. You've got documented proof that your forecasting is credible. Finance sees a team that delivers on commitments and uncovers upside. The next business case gets approved faster and with less friction because you've built a track record of realistic assumptions that turned into real wins.
The Real Lesson
Building credible business cases for innovative RUO products isn't about having perfect data, it's about being transparent about uncertainty while grounding your assumptions in evidence. The teams that win aren't the ones with the biggest numbers; they're the ones Finance and Sales actually trust, because they've admitted what they don't know and explained clearly why they believe what they do.
Your business case should start conversations, not end them.
Q: How conservative should my assumptions actually be? I feel like I'm constantly torn between what I believe will happen and what Finance will accept.
A: Here's the tension point: "conservative" and "credible" aren't the same thing. A conservative assumption that's unsupported is just pessimism. Instead, anchor every assumption to something real. If you assume 15% year-one adoption, that should come from comparable products in your market, adjusted for your specifics. If you're claiming 40% gross margins, that should reflect your actual manufacturing or delivery model, not industry averages. The goal isn't to be conservative, it’s to be justified. Finance respects an assumption more when you can explain exactly why you chose it, even if they'd have chosen differently. That credibility buys you latitude later when your actual results diverge from the plan.
Q: At what point do I kill a business case? I keep iterating because I believe in the product, but I'm worried I'm just making it fit the narrative.
A: If you're rebuilding the case more than twice because the numbers don't work, that's a signal to pause. Ask yourself honestly: Are the underlying assumptions changing because I've learned something new, or am I just adjusting inputs until the ROI looks acceptable? The hard truth is that some innovations aren't ready, not because they're bad ideas, but because the market timing, cost structure, or competitive landscape isn't aligned yet. Sometimes the right move is to table the case and revisit it in 12 months when more data exists. That's not failure; that's maturity. Your job is to build cases worth building, not to make weak cases look strong. If you can't build a credible case around defensible assumptions, that's information too.
Q: How do I position my business case when the real win isn't revenue, it's strategic positioning or defensive moves against competitors?
A: Name it explicitly. Don't bury a strategic rationale under revenue projections that don't make sense. If you're building this product because a competitor is moving into your space and you need a response, say that. If it's a platform play that sets you up for adjacent markets down the line, quantify that. Finance teams actually understand strategic bets, they just need you to be clear that this particular case isn't primarily about ROI in years one through three. Frame it as "investment in market position" rather than "revenue-generating product." Then build separate success metrics: customer wins in key accounts, share of wallet gains, time-to-market advantage. This approach earns trust because you're not overselling the near-term business; you're being honest about what you're optimizing for.