
Consumer App · Evaluative · Retention

When the obvious research path was blocked, finding another way to answer the retention question

Day 1 retention was tanking. The instinct was to talk to users who left — but low market share made lapsed users impossible to recruit. Rather than stall, I reframed the question entirely: instead of asking why users left, I tested whether users could perceive the browser's core value before they had a reason to leave. That pivot unlocked the evidence the team needed to move from debate to action.

My role
End-to-end UX Research
Type
Evaluative · Hypothesis testing
Sample
n=120, between-participants
Collaborators
Product, Design, Growth
Research impact

The challenge

Day 1 retention was a critical business problem. Analytics showed a steep drop-off among new users on their first day after installing the browser. The natural first move was an uninstall survey — ask people who left why they left. The results came back ambiguous: users simply said they preferred another browser. Technically accurate, but not actionable. It told us nothing about what specifically failed during their experience or what could be fixed.

The conventional next step would have been a qualitative deep-dive — recruiting lapsed users for interviews to uncover the real reasons behind "I just prefer Chrome." But the browser's low market share made recruiting lapsed users effectively impossible. There weren't enough of them to find, and those who had uninstalled had no reason to re-engage for a research session. The standard research path was blocked.

Meanwhile, the design team had built a new onboarding experience they believed would fix the problem. Leadership wasn't ready to commit without evidence — and the team was stuck in iterative debate with no clear way forward.


Reframing the question

Rather than waiting for a recruiting solution that didn't exist, I reframed the research question entirely. If we couldn't study users after they left, we could study users before they had a reason to leave — specifically, whether new users could perceive the browser's core value during their earliest moments with the product.

This wasn't a compromise. It was a more precise question. Previous research had already pointed to two root causes of drop-off: users weren't perceiving core value after initial exploration, and the onboarding flow felt random and unpredictable. If those were the real drivers of churn, then testing whether the new design fixed them was a direct test of the retention hypothesis — without needing a single lapsed user in the room.

I worked with design, product, and growth to define four specific attributes the new design aimed to improve: perceived core value, identity association, navigation clarity, and early task completion. These became the measurement framework — turning a vague "does it feel better?" question into something the team could evaluate, debate, and act on with confidence.


Study design

With the question reframed, I designed a comparative evaluation study — the fastest way to produce statistically defensible evidence under time pressure. New users were the right population: available, motivated, and experiencing the onboarding fresh. The between-participants design meant each person saw only one version, eliminating order effects that could contaminate the comparison.

Methodology detail

Unmoderated testing, between-participants (n=120): control group on the shipped onboarding, treatment group on the new design. Criterion sampling based on power user personas — usage patterns, past behavior, and value recognition. Mann-Whitney U test for a non-parametric comparison of attitude ratings across the two conditions. Journey maps used to track attention, emotion, and behavior at each onboarding step.
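
To make the comparison concrete, here is a minimal sketch of the analysis, assuming attitude ratings were collected per attribute on a Likert scale. The file name, column names, and the one-sided test direction are illustrative assumptions, not the actual study artifacts.

```python
# Minimal sketch of the between-participants comparison, assuming attitude
# ratings were collected per attribute on a Likert scale. The file name,
# column names, and one-sided direction are illustrative assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

ratings = pd.read_csv("onboarding_ratings.csv")  # one row per participant

ATTRIBUTES = [
    "perceived_core_value",
    "identity_association",
    "navigation_clarity",
    "early_task_completion",
]

control = ratings[ratings["condition"] == "control"]      # shipped onboarding
treatment = ratings[ratings["condition"] == "treatment"]  # new design

for attr in ATTRIBUTES:
    # Mann-Whitney U: non-parametric, suited to ordinal Likert ratings from
    # independent groups (each participant saw only one onboarding version).
    stat, p_value = mannwhitneyu(
        treatment[attr], control[attr], alternative="greater"
    )
    print(f"{attr}: U={stat:.0f}, p={p_value:.4f}")
```

The non-parametric choice matters here: Likert ratings are ordinal, so a rank-based test avoids assuming interval-scale, normally distributed scores.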


What we were testing — and why it mattered

The study was designed around two core questions the team had struggled to answer without evidence:

Q1
Does the new design help users understand what the browser is for?

Previous research showed users left because they couldn't see the value of switching to a new browser. If the new onboarding couldn't fix that, no amount of polish would improve Day 1 retention.

Q2
Is the improvement significant enough to justify committing resources to ship it?

Directional signals weren't enough — leadership needed statistical evidence. The study was designed to produce a verdict, not a recommendation: does the data support shipping this design or not?


What the data showed

The new design won — clearly and measurably. Findings were presented to design, product, and growth stakeholders in a dedicated review session structured around implications and decisions, not just results.

3 / 4
Targeted attributes where the new design scored significantly higher
Statistically significant improvement in perceived core value and navigation clarity
Null hypothesis rejected: the observed differences were unlikely to be due to chance
01
Users understood the browser's value earlier

In the new design, participants identified the browser's core purpose faster and with more confidence. The value that previously wasn't landing during initial exploration was now registering at the right moment in the flow.

02
Navigation felt intentional, not random

The new flow addressed the "random and messy" pattern from previous research. Participants moved through onboarding with fewer wrong turns and less confusion about what to do next.

03
One attribute still needs work

The fourth targeted attribute — identity association — didn't improve significantly. Rather than treat this as a failure, the team flagged it as the next design challenge, with a dedicated exploration now on the roadmap.

04
The evidence ended the debate

Before this study, the team was caught in iterative tweaks without a clear direction. The comparative data gave leadership a concrete basis to decide: adopt the new design, refine the one remaining gap, and move toward development.


From findings to action

The study didn't just produce a report — it produced a clear, agreed-upon plan that the design, product, and growth teams aligned on immediately after the insight review.

Polish the new design based on specific research findings

Particular attention to the moments where users still showed confusion, surfaced through journey mapping

Continue exploring the fourth attribute — identity association

Dedicated design exploration now on the roadmap, informed by where the current design fell short

Move the refined design into development

Leadership committed resources with the new design as the preferred flow — not a candidate for further debate

A/B test on Day 1 retention as the next measurable milestone

Onboarding version as the controllable variable; Day 1 retention rate as the outcome metric — closing the loop between this research and the original business problem

Why the A/B test matters

This study proved the new design performed better in a controlled research setting. The A/B test will prove it in the wild — connecting usability improvement directly to the retention metric that started this project. That's the full loop: analytics surfaced the problem, research diagnosed it and validated a solution, live testing will confirm the business impact.
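
To make the milestone concrete, here is a minimal sketch of how the live retention comparison could be read once experiment data exists. All counts and arm sizes below are placeholders, and the two-proportion z-test is one reasonable analysis choice rather than the team's confirmed plan.

```python
# Hedged sketch of reading the planned Day 1 retention A/B test once live data
# exists. Counts, arm sizes, and arm ordering are placeholders, not results.
from statsmodels.stats.proportion import proportions_ztest

retained_day1 = [4210, 4630]    # users active on Day 1: [shipped onboarding, new onboarding]
installs = [10000, 10000]       # total installs per arm

# One-sided two-proportion z-test: does the new onboarding retain a larger
# share of users on Day 1? ("smaller" tests whether the first proportion
# is below the second.)
z_stat, p_value = proportions_ztest(retained_day1, installs, alternative="smaller")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

The onboarding version stays the single controllable variable, so any movement in Day 1 retention can be attributed to the design change rather than to seasonality or acquisition mix.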


Looking back

The most consequential decision in this project wasn't methodological — it was recognizing when to stop pursuing the ideal study design and find a better question instead. The instinct to recruit lapsed users for qualitative interviews was correct in theory. Pursuing it despite an insurmountable recruiting constraint would have stalled the team for months with nothing to show for it.

Reframing from "why did users leave?" to "can users perceive value before they have a reason to leave?" wasn't a concession — it was a sharper question that produced more actionable evidence. The uninstall survey told us users preferred other browsers. The reframed study told us exactly which moments in the experience were failing to make the case for this one.

What I'd carry forward: treat ambiguous exit data as a signal to reframe the research question, not just a gap to fill with more data collection. The lapsed user problem is common in low-market-share consumer products — having a playbook for working around it is increasingly essential.