Senior UX Researcher · 6+ years
Bad bets on what users want are expensive. I find out what's actually true — what users need, where designs break down, and what's worth building next.
I've done this across AI, SaaS, crypto, and consumer products, from early discovery through rigorous design evaluation to post-launch diagnosis. The output is always the same: evidence your team can act on.
Here's what it looks like when research is working — before you even open a case study.
Before development
Engineering time is the most expensive resource you have. I run fast usability tests on prototypes so critical flaws surface in a week — not after a full sprint.
De-risked ship, fewer late-stage pivots
At the start of a new product or feature
When you're entering a new space, I talk to users, map their real workflows, and come back with a clear picture of which problems are worth solving — and which aren't.
Roadmap shaped by evidence, not assumptions
When users are dropping off
If users sign up but don't stick around, I find the exact friction points in the experience and give the team a ranked list of what to fix, with evidence behind each call.
Clear priority fixes, tied to business metrics
When stakeholders disagree
Teams argue about what users want. I design the right study to answer the contested question — so the team can align around facts instead of opinions.
Faster decisions, shared direction
When entering a new market or user segment
Launching into healthcare, crypto, or enterprise without deep user knowledge is a gamble. I surface mental models, trust barriers, and unmet needs before you commit.
Confident market entry, fewer missteps
When research insights stall in a doc
Insights are only valuable if they change something. I deliver findings in the format that moves your team — whether that's a one-pager for a founder or a structured brief for a product review.
Insights that influence the roadmap, not just the archive
End-to-end research across discovery, evaluation, and post-launch diagnosis.
Uncovering user mental models and unmet needs to shape the product strategy for a generative AI assistant before development began.
View case study →
Diagnosing where and why users dropped off during onboarding — and delivering a ranked set of fixes tied directly to retention metrics.
View case study →
Applying a Jobs-to-be-Done (JTBD) framework to understand what clinicians and administrators actually needed from a predictive analytics platform — and what they didn't.
View case study →
Evaluating the onboarding experience for a crypto wallet — identifying trust barriers and friction points that prevented new users from completing setup.
View case study →
Strategic influence first. Methods and tools in service of that.
What I influence
How I find answers
I choose methods based on the question, not habit. Qualitative for depth, quantitative for scale — often both.