In Greek mythology, Cassandra was cursed with the ability to see the future perfectly, but no one would ever believe her. She'd warn of disaster, and people would dismiss her as alarmist or delusional. The curse wasn't that she was wrong; it was that nobody believed her even when she was right.
Every Product Manager (PM) in life sciences knows this feeling. You've got a genuinely compelling innovation, something that solves a real lab pain point or opens an entirely new workflow. Your technical team is excited. Early customers give it a thumbs up. You can "see" where this is going. But when you sit down to build the business case, reality hits differently than you imagined. Suddenly you're Cassandra, trying to convince Finance and Sales that you're not just enthusiastic, you're credible. And they're skeptical in ways that have nothing to do with your vision and everything to do with the numbers you can't easily defend.
Here are the five struggles that have tripped me up, and more importantly, what I've learned works.
The Struggle: When you're creating something truly novel, say, a platform that didn't exist two years ago, there's no historical data to anchor on. How do you size a market that, technically, didn't exist before your product?
I've spent weeks building bottom-up models: counting labs, estimating adoption curves, extrapolating from adjacent markets. The problem? Every assumption feels like fiction, and Finance teams push back.
Without historical comparables, it's easy to fall into the planning fallacy: you can make the number whatever you want. I've seen PMs build cases claiming 40% CAGR because "this is totally new." That's not credibility; that's a confession that you don't know. The cognitive bias is real: we're naturally optimistic about our own projects and terrible at anchoring to what actually happened in similar situations.
The Guidance: Start with the pain, not the market size. Quantify the exact problem you're solving: not the addressable market, but the magnitude of the inefficiency or cost it represents.
For example:
[Your RUO tool eliminates 40 hours of manual work per lab per year] × [There are 15,000 target labs globally]
= 600,000 hours recovered annually
If you make the conservative assumption that the work is worth $40/hour, the total problem being solved is estimated at $24M per year.
Then be transparent about an adoption assumption: "Assuming 10% penetration over 5 years based on similar platform adoption curves in genomics." This assumption acknowledges uncertainty while grounding the case in precedent.
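If it helps to keep the arithmetic auditable, here's a minimal sketch of the pain-based sizing plus the penetration assumption (in Python, using the figures from the example above; the year-by-year ramp to 10% is an illustrative assumption, not data):

```python
# Minimal sketch: size the problem from the pain, then apply a transparent
# adoption assumption. Figures mirror the illustrative example above; the
# penetration ramp is an assumption, not data.

hours_saved_per_lab_per_year = 40
target_labs = 15_000
value_per_hour = 40                      # conservative $/hour for the recovered work

annual_problem_value = hours_saved_per_lab_per_year * target_labs * value_per_hour
print(f"Total problem being solved: ${annual_problem_value / 1e6:.0f}M per year")

# Assumed penetration over 5 years, loosely modeled on prior platform adoption curves
penetration_by_year = [0.01, 0.03, 0.05, 0.08, 0.10]
for year, share in enumerate(penetration_by_year, start=1):
    captured = annual_problem_value * share
    print(f"Year {year}: {share:.0%} penetration -> ${captured / 1e6:.1f}M of problem addressed")
```

The value of writing it this way isn't the code itself; it's that every assumption sits on its own line where Finance can challenge it.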
The Struggle: RUO products often take 18-36 months to develop given all the verification and validation required. But you're asking the organization to commit $5-10M before you truly know if the market will adopt at the pace you're projecting.
The tension is real: you need investment to validate the market, but you need market validation to justify the investment. It's a catch-22 that Finance teams are trained to spot.
I've made the mistake of front-loading all development costs into year one and then wondering why CFOs give me the look. "So you're spending $8M upfront with revenue trickling in starting year two?" Yeah, that's a hard sell.
The Guidance: Phase the business case around "learning gates", not just funding stages. Consider structuring your plan into smaller chunks, each with a lower-risk investment and easy-to-understand go/no-go decision criteria. For example:
Phase 1 ($2M, 6 months): de-risk the core technical and market assumptions, with a clear go/no-go gate before the remaining investment is committed.
This shifts the conversation from "Is this worth $8M?" to "Is $2M reasonable to de-risk $8M?" Phase 2 can then focus on cost-effective manufacturing, with another go/no-go target.
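To make the de-risking arithmetic concrete, here's a minimal sketch comparing a phased commitment with funding everything upfront; the 60% gate-pass probability and the cost split are purely illustrative assumptions:

```python
# Minimal sketch: expected spend under a phased plan vs. an upfront commitment.
# All numbers are illustrative assumptions, not figures from a real case.

upfront_cost = 8.0          # $M committed in one decision
phase1_cost = 2.0           # $M to reach the learning gate
phase2_cost = 6.0           # $M committed only if the gate is passed
p_gate_pass = 0.6           # assumed probability Phase 1 clears its go/no-go criteria

expected_phased_spend = phase1_cost + p_gate_pass * phase2_cost
capital_at_risk_phased = phase1_cost    # phased: loss is capped at Phase 1
capital_at_risk_upfront = upfront_cost  # upfront: the full commitment is exposed

print(f"Expected spend, phased plan: ${expected_phased_spend:.1f}M")
print(f"Capital at risk if the thesis is wrong: phased ${capital_at_risk_phased:.1f}M "
      f"vs. upfront ${capital_at_risk_upfront:.1f}M")
```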
Also, separate capex from opex in your narrative. If you're building manufacturing capabilities, that's a different ROI conversation than pure software development. Investors need to see different cost structures clearly.
And here's the real conversation that drives the investment home: show the "cost of inaction". What are competitors doing? What's the risk if we wait two years? Sometimes the best business case isn't about the product's potential, it's about market timing.
The Struggle: Your tool works in the lab. Your early users think it's great. However, you're not running a controlled trial, and your customer feedback lacks statistical rigour. Finance wants confidence; you've got anecdotes.
The gap between "promising early data" and "defensible market assumption" can kill a business case. You'll have researchers saying things like, "This saved us a week on our workflow," but translating that into reliable ROI inputs is murky. One lab's game-changer is another lab's nice-to-have.
The Guidance: Invest in structured validation early, before you build the full business case. Work with 3-5 design partners (not cheerleaders, actual critical users) and run time-motion studies. Get specific, quantified feedback: How long does the current workflow take? How much does it cost? What happens after your tool is deployed?
Use that data to build confidence intervals, not point estimates. "Based on validation with 5 beta users, we project 20-40% efficiency gain, with a most-likely scenario of 28%." This is credible. It shows you've done the work without overselling.
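As a rough illustration, here's a minimal sketch of that translation, using hypothetical workflow timings from five beta users; with so few data points, treat the output as an observed range rather than a formal confidence interval:

```python
# Minimal sketch: turning raw time-motion measurements from a handful of beta
# users into a range rather than a single point estimate.
# The workflow times below are hypothetical, not real study data.

baseline_hours = [10.0, 12.5, 9.0, 11.0, 14.0]   # current workflow, per run
with_tool_hours = [7.5, 8.0, 7.0, 8.5, 9.0]      # same workflow using the new tool

gains = [
    (before - after) / before
    for before, after in zip(baseline_hours, with_tool_hours)
]

low, high = min(gains), max(gains)
typical = sorted(gains)[len(gains) // 2]          # median as the "most likely" scenario

print(f"Observed efficiency gain: {low:.0%}-{high:.0%}, most likely ~{typical:.0%}")
```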
Also, separate the adoption risk from the efficacy risk in your case. Your tool might be great, but labs might be slow to adopt new workflows because, for example, concordance data are required. These are different problems with different solutions, and conflating them is where many cases fall apart.
The Struggle: Even when your product is genuinely better, incumbent tools have a massive advantage: switching costs, training inertia, and existing relationships with labs. Your innovation might reduce analysis time by 30%, but if customers have already trained their staff and integrated the old tool into their pipeline, you're asking them to absorb real disruption costs.
For disruptive products (significant improvements to existing tools), the business case problem is that your ROI math assumes adoption, but adoption requires customers to accept short-term pain for long-term gain. Labs are risk-averse. They want proven tools, not innovation.
The Guidance: Build your business case around "total cost of ownership", not just the feature delta. Yes, your tool is faster. But what's the cost of migration, retraining, and validation? What's the risk if something goes wrong mid-migration?
When the customer sees a realistic picture: "Your current solution costs $500K/year; ours costs $300K/year, but switching costs $80K and takes 3 months", they can make an informed decision. Often, they'll still choose you, but only if you're upfront about the friction.
Also, identify early adopter segments where switching costs are lower. New labs, labs expanding into new areas, or those already planning a platform refresh are warmer prospects. Build your case around that segment first, then expand.
The Struggle: Sales won't commit to aggressive revenue targets - they'll give you a "sandbag" number they're confident they can beat. It's smart risk management on their part; they don't want to inherit targets they think are unrealistic. But from a business case perspective, this creates a problem: your ROI looks worse than you actually believe it will be.
You know the real upside is higher. Sales knows it too. But the business case is locked in with conservative numbers, and Finance is evaluating the investment based on ROI that understates the opportunity. You're left in an awkward position: if you push back on Sales' assumptions, you look like you don't trust them. If you accept them, your case looks marginal and harder to justify.
The Guidance: Separate the investment decision from the revenue forecast. Build your case with two explicit layers: a committed base case built on the number Sales will actually sign up to, which is what the investment decision is judged against, and an upside scenario that shows what you believe is achievable and why, presented as optionality rather than a promise.
This approach does two things: It acknowledges that Sales' number is real and defensible (you're not undermining them), while also signalling to Finance that there's optionality in the case beyond what's been guaranteed. It's transparent.
Then, set up explicit tracking from day one. Track actual revenue against the forecast every quarter, and when Sales inevitably outperforms, document it meticulously: Year 1 actual vs. forecast, Year 2, Year 3. This record becomes the credibility engine for your next business case.
When you go back to Finance asking for Phase 2 funding or expansion capital, you're no longer arguing from optimism. You're arguing from precedent: "We forecasted conservatively in the original case and beat it by 35%. Here's what actually happened, here's why we outperformed, and here's what that tells us about the next phase." Suddenly, your assumptions aren't hopeful; they're evidence-based.
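A minimal sketch of what that tracking record might look like is below, with entirely made-up forecast and actual figures; the point is the documented delta, not the tooling:

```python
# Minimal sketch: the year-by-year actual-vs-forecast record that backs the
# "we forecast conservatively and beat it" argument. All figures are made up.

forecast = {"Y1": 1.2, "Y2": 2.5, "Y3": 4.0}     # committed base case, $M
actual   = {"Y1": 1.6, "Y2": 3.4, "Y3": 5.4}     # what was actually delivered, $M

for year in forecast:
    delta = (actual[year] - forecast[year]) / forecast[year]
    print(f"{year}: forecast ${forecast[year]:.1f}M, actual ${actual[year]:.1f}M, "
          f"{delta:+.0%} vs. plan")
```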
This also aligns incentives beautifully. Sales commits to a number they can beat and becomes invested in overperformance. You've got documented proof that your forecasting is credible. Finance sees a team that delivers on commitments and uncovers upside. The next business case gets approved faster and with less friction because you've built a track record of realistic assumptions that turned into real wins.
Building credible business cases for innovative RUO products isn't about having perfect data, it's about being transparent about uncertainty while grounding your assumptions in evidence. The teams that win aren't the ones with the biggest numbers; they're the ones Finance and Sales actually trust, because they've admitted what they don't know and explained clearly why they believe what they do.
Your business case should start conversations, not end them.