Strivenn Thinking

The Janus Threshold: When to Deploy AI in Product Development Research

Written by Jasmine Gruia-Gray | Jan 2, 2026 1:47:55 PM

Every Roman household kept a shrine to Janus near the doorway. Unlike the borrowed Greek gods, Janus was uniquely Roman, a god with two faces looking in opposite directions simultaneously. One face surveyed what lay ahead, the other what remained behind.


He governed all beginnings and transitions. Romans invoked his name first in every prayer because they understood a fundamental truth: you cannot cross a threshold wisely by looking in only one direction. You need to see the efficiency and promise ahead and understand what you're leaving behind.


As Product Managers (PMs) in life science tools, we now stand at our own Janus threshold. AI agents hold the potential to attend our Voice of Customer (VOC) interviews, summarise our ethnographic surveys, and compress hours of qualitative research into tidy bullet points. The forward-facing promise is seductive: more coverage, faster synthesis, freed capacity for strategic work. But what does the backward-facing perspective reveal? What are we trading away when we send a bot to cross the threshold in our place?


The Hidden Cost of Delegation

Recent neuroscience research from the NeuroLeadership Institute exposes an uncomfortable reality. When we delegate our presence at critical research moments to AI tools, we lose cognitive benefits that only occur when we're physically or virtually present during discovery.


Think about attending a customer interview. When the researcher describes their Western blot workflow frustration, your brain doesn't just record the words. It activates competing ideas, connects to your knowledge of pricing constraints and competitive products, notices contradictions between what they say and what their lab setup reveals. This mental activation creates the foundation for insight.


AI summaries eliminate this activation. You get conclusions without the cognitive journey that makes them actionable. You lose the shared understanding that builds when you ask clarifying questions in real time. These aren't luxuries. They're the cognitive infrastructure of product insight.


A recent study demonstrated the cost. 83% of individuals who used generative AI to write an essay struggled to remember the content of their work, whereas only 11% of those who used a search engine (or no tool at all) had the same problem. The implication for PMs is stark. If you're outsourcing the experience of VOC research to an AI summariser, you're degrading your ability to remember, synthesise, and act on what you learned.


The Janus Framework: Looking Both Ways Before Crossing

The question isn't whether to use AI in product development research. The question is how to deploy it without sacrificing the cognitive benefits that make us effective PMs. Here's a framework for standing at the threshold with both faces active.


Face One: The Promise of AI Augmentation

AI tools genuinely excel at specific research tasks. They can transcribe a 90-minute ethnographic session with flow cytometry users with near-perfect accuracy. They can tag themes across 40 customer interviews faster than any human analyst. They can identify usage patterns in product telemetry data (automated data collection about how customers actually use your product) that would take weeks to surface manually.


Deploy AI for volume and velocity, not for understanding. When you're analysing the sixteenth interview about qPCR assay optimisation challenges, using AI to pull recurring complaint patterns is smart delegation. When you're conducting the third interview and still forming your mental model of the problem space, AI summarisation robs you of the mental activation that builds deep understanding.


Face Two: What We Lose When We Delegate

When you're physically or virtually present during customer research, three processes activate that don't occur when reviewing AI summaries:

  • Deep attention. Real-time participation forces focused attention on ideas and the social signals around them. Your brain encodes information more deeply when processing it in the company of others. We evolved to pay close attention to who's speaking, how others respond, and what that means for us. Being present activates mental circuits that summaries cannot replicate.
  • Connected thinking. When you hear a researcher explain their antibody validation frustration in real time, your brain triggers a cascade of related concepts. That cascade connects to your knowledge of competitive products, regulatory requirements, pricing constraints, and field application scientist feedback. This chain reaction enables you to find implications, recognise patterns, and integrate new information with existing knowledge. AI summaries short-circuit this process. You get the conclusion without the cognitive journey that makes it actionable.
  • Personal discovery. Every breakthrough in product strategy begins with an "aha moment" when someone notices a pattern others missed. That insight fuels motivation to act, whether solving a technical challenge, identifying a new market segment, or reconceiving the value proposition. When AI delivers pre-processed insights, you lose the motivating power of personal discovery. You might access the solution faster, but you won't feel the same urgency to champion it.


The PHASE Framework for Threshold Decisions

Before delegating any research activity to AI, apply the PHASE framework. Each question helps you look both forward and backward, like Janus standing at the threshold:

  • P - Purpose: Am I clear on what this research is designed to answer, or am I still forming the question? If you're still defining the problem space (early ethnographic work, exploratory VOC for a new assay category), you need to be present. Your ability to ask follow-up questions in real time is irreplaceable. AI can summarise answers, but it cannot reformulate questions based on unexpected responses.
  • H - Hypotheses: What assumptions or hypotheses am I carrying into this research, and how will I know if they're wrong? Presence forces you to confront disconfirming evidence in real time. When a core facility director explains why your assumed workflow doesn't match their reality, the cognitive dissonance you experience live is valuable. AI summaries smooth over contradictions you need to feel.
  • A - Action: Is this research generating evidence for a decision already made, or evidence that could change the decision? If you're validating a launch plan that's already committed, AI summarisation is appropriate. If you're deciding whether to pursue a new imaging platform, delegate at your peril. You need the full context, the hedges and uncertainties, the body language that reveals conviction or doubt.
  • S - Solutions: Could this research surface alternative solutions I haven't considered? Novel insights emerge from unexpected connections. When a researcher mentions a workaround they built using lab equipment you didn't know they owned, your brain connects that to pricing models, partnership opportunities, and competitive positioning. AI extracts facts. It doesn't make creative leaps.
  • E - Endorsement: Will I need to defend these findings or build internal support for action? If you're presenting VOC insights to R&D to deprioritise a feature they're attached to, you need the rich detail that comes from personal participation. "The AI summary said users don't care" carries less weight than "I've now heard 12 pharma researchers explain why this workflow step doesn't map to their reality."


After You Cross the Threshold: Where AI Delivers Strategic Value

PHASE determines whether you attend. Once you've crossed that threshold and participated directly in discovery, AI becomes genuinely powerful for what it does best: pattern recognition at scale and trend analysis across datasets.


Deploy AI for post-experience synthesis. After you've personally conducted 8-10 ethnographic sessions watching researchers run your multiplex immunoassay, use AI to analyse all session transcripts simultaneously. It will surface how often "plate-to-plate variability" appears as a complaint, which customer segments mention cost versus performance, and whether the pain points cluster by application type or lab size. You bring the contextual understanding from being present. AI brings the computational horsepower to find patterns you'd miss manually.
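As a rough sketch of what this kind of theme counting looks like under the hood, the snippet below tallies how many transcripts mention each recurring complaint. The theme phrases, file layout, and regular expressions are illustrative assumptions, not part of any real toolchain:

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical theme phrases to track across session transcripts;
# in practice these come from your own notes after attending the sessions.
THEMES = {
    "plate-to-plate variability": r"plate[- ]to[- ]plate variability",
    "cost concerns": r"\b(cost|price|pricing|expensive)\b",
    "performance": r"\b(sensitivity|dynamic range|reproducib\w+)\b",
}

def count_themes(transcript_dir: str) -> Counter:
    """Count how many transcripts mention each theme at least once."""
    counts = Counter()
    for path in Path(transcript_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        for theme, pattern in THEMES.items():
            if re.search(pattern, text):
                counts[theme] += 1
    return counts
```

In practice an AI tool does this with semantic matching rather than literal patterns, but the output shape is the same: a count per theme that you interpret with the context you gained by being in the room.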


Use AI to cross-reference qualitative and quantitative data. You attended the VOC sessions and heard three pharma customers mention they're running your PCR kit "off protocol." Feed the interview transcripts into AI alongside your product telemetry data. AI can identify whether the off-protocol usage correlates with specific lot numbers, geographic regions, or application types. That correlation might reveal a manufacturing inconsistency or a market segment using your product in ways you never intended.
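To make the cross-referencing step concrete, here is a minimal sketch, assuming you have already coded each interview for off-protocol mentions and can export telemetry keyed by customer. All column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical inputs: interview-derived flags and product telemetry.
interviews = pd.DataFrame({
    "customer_id": ["C1", "C2", "C3", "C4"],
    "off_protocol_mentioned": [True, True, False, True],
})
telemetry = pd.DataFrame({
    "customer_id": ["C1", "C2", "C3", "C4"],
    "lot_number": ["L-204", "L-204", "L-101", "L-204"],
    "region": ["EU", "EU", "US", "APAC"],
})

# Join the qualitative flag onto telemetry, then check whether
# off-protocol mentions cluster by lot number.
merged = telemetry.merge(interviews, on="customer_id")
by_lot = merged.groupby("lot_number")["off_protocol_mentioned"].mean()
print(by_lot)
```

A result where one lot number dominates the off-protocol mentions is the kind of correlation worth escalating to manufacturing; a flat distribution points instead at a behavioural segment.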


Let AI quantify your observations. You noticed during customer visits that researchers seem frustrated with your software interface, but you're not sure if it's a vocal minority or a widespread issue. AI can analyse support tickets, user forum posts, and net promoter score (NPS) survey comments to quantify how often "confusing UI" appears relative to other complaints. You provide the hypothesis from direct observation. AI provides the statistical validation across thousands of data points.
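The quantification step can be as simple as computing the relative share of one complaint category. The sketch below assumes tickets have already been labelled (by an AI classifier or by hand); the labels and counts are invented for illustration:

```python
from collections import Counter

# Hypothetical, pre-labelled support tickets.
tickets = [
    "confusing UI", "confusing UI", "shipping delay",
    "lot variability", "confusing UI", "documentation gap",
]

counts = Counter(tickets)
share = counts["confusing UI"] / sum(counts.values())
print(f"'confusing UI' accounts for {share:.0%} of complaints")
```

If "confusing UI" turns out to be 5% of complaints rather than 50%, your field observation came from a vocal minority; the number, not the impression, decides which.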


The principle is consistent with Janus: look both directions. Attend critical research moments yourself to build deep understanding. Then leverage AI to analyse, quantify, and cross-reference what you experienced at a scale your brain cannot match. The AI analysis tells you what customers complain about. The ethnographic presence reveals why it matters, when it's a dealbreaker versus a nice-to-have, and how they'd modify their workflow to accommodate the change. You need both perspectives.


Standing at the Threshold

Janus succeeded by refusing to prioritise one direction over the other. He looked forward and backward simultaneously, understanding that wisdom requires both perspectives. The Romans who built an empire knew that every beginning demands acknowledgment of what you're leaving behind.


As PMs, we stand at an unprecedented threshold. AI augmentation offers genuine productivity gains. But those gains become losses if we surrender the deep understanding that comes from direct participation in discovery.


Deploy AI for scale and speed. Reserve your presence for depth and insight. Ask the PHASE framework questions before delegating. And remember that some thresholds are worth crossing yourself.