The Right Friction Converts Better Than None

TL;DR
- The "remove all friction" orthodoxy is wrong. Intentional friction that helps users understand why a product is for them consistently increases conversion, not decreases it.
- The mechanism: good friction creates self-selection. Users who pass through it arrive with higher intent, better context, and clearer expectations. They convert and retain at higher rates.
- AI products need this more than most, because the cold-start problem is severe and users who don't understand what the product can do churn before they experience value.
Adding the right friction at the right moment in the user journey increases conversion. Intentional onboarding steps that help users understand why a product is for them consistently outperform frictionless paths on activation, retention, and paid conversion.
Every product team I've worked with has the same reflex. When conversion drops, strip friction. Remove steps. Shorten forms. Get the user to the product as fast as possible.
That reflex is right about half the time. The other half, it's actively destructive.
Why is the "remove all friction" advice only half-right?
The "remove friction" playbook earned its reputation honestly. Most friction in most products is genuinely unnecessary. Redundant form fields, confusing navigation, slow page loads, password complexity requirements that frustrate without meaningfully improving security. Removing these is almost always the right call.
The problem is that the industry generalised from this specific truth to a universal rule: less friction is always better. That generalisation breaks down the moment you encounter friction that serves the user, not just the process.
Think about it from the user's perspective. When someone signs up for a new product, they're holding a question: "Is this thing for me?" If the answer isn't clear, they bounce. Not because the sign-up flow was too long. Because nothing in the experience told them why they should care.
A well-designed onboarding quiz answers that question. "What brought you here today?" isn't a pointless form field. It's the product saying "I want to understand your situation so I can show you the right thing." The user feels heard. The product gets signal. The downstream experience improves because the product knows something about who it's serving.
What makes onboarding friction good instead of bad?
Not all friction is good. The distinction matters, and it's testable.
1. Does it help the user understand that the product is for them?
A quiz asking "What's your primary goal?" helps the user self-identify. They see a category that matches their situation and think "Yes, that's me." That moment of recognition creates intent. It's the difference between a cold visitor browsing and a warm prospect who believes the product understands their problem.
A CAPTCHA does not create this moment. A terms-of-service checkbox does not create this moment. An email verification step does not create this moment. These are necessary frictions (or unnecessary ones, depending on your risk tolerance), but they're not informative.
2. Does it feed downstream personalisation?
Good friction generates signal that makes the rest of the experience better. If the onboarding quiz tells you the user is a solo operator running a beauty salon, you can show them booking templates instead of team management features. You can send onboarding emails about solo-operator workflows instead of enterprise playbooks. You can target lookalike audiences in paid acquisition based on the profile clusters that convert best.
This downstream value is the part most teams underestimate. The quiz isn't just an onboarding step. It's the first data point in a personalisation chain that compounds across the entire customer lifecycle.
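To make the personalisation chain concrete, here is a minimal sketch of the routing a quiz enables. Everything in it is illustrative: the `QuizAnswers` fields, the segment names, and the feature lists are invented for the example, not taken from any real product.

```python
# Hypothetical sketch: route a new user to a personalised first-run
# experience based on onboarding quiz answers. All names (QuizAnswers,
# the segments, the feature lists) are invented for illustration.

from dataclasses import dataclass


@dataclass
class QuizAnswers:
    business_type: str   # e.g. "beauty_salon", "trades"
    team_size: int       # 1 = solo operator


def first_run_features(answers: QuizAnswers) -> list[str]:
    """Pick which features to surface first, based on quiz signal."""
    if answers.business_type == "beauty_salon":
        if answers.team_size == 1:
            # Solo operator: booking templates, not team management
            return ["booking_templates", "online_bookings", "reminders"]
        return ["staff_roster", "team_calendar", "online_bookings"]
    # No usable signal: fall back to a generic first-run experience
    return ["getting_started_tour"]


solo_salon = QuizAnswers(business_type="beauty_salon", team_size=1)
print(first_run_features(solo_salon))
```

The same answers object can feed the email sequence and ad-audience steps described above, which is why the quiz is worth treating as a data point rather than a form.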
3. Has it been tested against the frictionless path?
Good friction must earn its place through data. Run the experiment. Show half your users the quiz and half the frictionless path. Measure not just sign-up completion but activation, retention at 7 and 30 days, and conversion to paid.
If the friction version wins on downstream metrics even while losing slightly on initial sign-up completion, keep it. The users who bounce because of a two-question quiz were unlikely to activate anyway. The users who complete it arrive with higher intent and better context.
Why do AI products need onboarding friction more than most?
The cold-start problem in AI products is uniquely severe.
When someone opens a new AI tool for the first time, they face a blank canvas. Unlike a traditional SaaS product where the interface suggests actions (here's your inbox, here's your dashboard, here's your to-do list), many AI products present an empty text box and the implicit instruction: "Ask me anything."
Most users don't know what to ask. Their instinct is to test with something trivial: "What's the weather?" or "Write me a poem." They get a competent but unremarkable response, conclude the product is a chatbot, and leave. They never discover the capabilities that would actually change how they work.
This is the AI usage gap at the onboarding level. The product can do extraordinary things, but the user's mental model at sign-up doesn't include those things.
Friction solves this. A brief onboarding flow that asks "What kind of work do you do?" and "What's the most repetitive part of your week?" creates two benefits simultaneously. It gives the product signal for personalisation. And it primes the user to think about the product in terms of their actual work, not in terms of party tricks.
Vertical SaaS products demonstrate this well. A salon booking platform that asks new owners a few questions during onboarding (how many staff, what services they offer, whether they take online bookings) looks like unnecessary friction. The data says otherwise. Users who complete the onboarding quiz activate at a significantly higher rate than those who skip it, because the quiz shapes their expectations and surfaces the features most relevant to their setup.
The same pattern appears in trades contractor software. A solo plumber sees a different initial experience than a builder managing a crew. That personalisation starts at the friction point. Without it, both users see the same generic surface and neither feels like the product was built for them.
How to distinguish informative friction from annoying friction
The distinction is simple to state and surprisingly easy to miss in practice.
Friction that informs answers the user's implicit question: "Is this for me?" It creates a moment of recognition, generates useful signal, and improves the downstream experience.
Examples: onboarding quizzes that recommend features, industry or role selectors that customise the dashboard, brief tutorials that demonstrate a capability relevant to the user's stated goal.
Friction that annoys serves the product's needs without creating value for the user. It slows them down without improving their understanding or their experience.
Examples: email verification before the user has seen any value, mandatory phone number collection, multi-page forms that collect data the product never visibly uses, CAPTCHAs at every step.
The tricky cases are steps that serve both purposes. An onboarding flow that collects business size (useful for personalisation) and business address (useful for the company's lead gen but not for the user) combines informative friction with annoying friction. The right answer is to keep the informative parts and cut the parts that serve only the company.
How does onboarding friction compound downstream?
The single most underappreciated aspect of good onboarding friction is how far its value extends beyond the initial conversion event.
When you understand who a user is at sign-up, you unlock:
- Lifecycle messaging. Instead of sending the same generic onboarding sequence to everyone, you send targeted content based on the user's stated goals and profile. Open rates go up. Activation goes up.
- Ad targeting. The profile data from onboarding feeds lookalike audiences. You acquire more users who resemble the ones that convert best, not just the ones who click most.
- Feature prioritisation. When you know which user segments are growing fastest, you can allocate product effort toward the features those segments need. This sounds obvious, but most teams are guessing because they don't collect the signal at onboarding.
- Churn prediction. Users who stated a specific goal at sign-up but haven't used the corresponding feature within two weeks are predictable churn risks. Without the onboarding signal, they're invisible.
The initial friction step is the seed for all of this. Skipping it doesn't just lose you sign-up context. It blindfolds every downstream system that would have used that context.
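The churn-prediction point can be pictured as a simple rule. This sketch assumes a hypothetical goal-to-feature mapping and field names; the two-week window comes from the text, everything else is invented for illustration.

```python
# Hypothetical sketch of the churn-risk rule described above: a user who
# stated a goal at sign-up but hasn't touched the matching feature within
# two weeks gets flagged. The goal-to-feature map and names are invented.

from datetime import date, timedelta

GOAL_TO_FEATURE = {
    "take_online_bookings": "online_bookings",
    "manage_team": "staff_roster",
}


def churn_risk(stated_goal: str, features_used: set[str],
               signed_up: date, today: date) -> bool:
    feature = GOAL_TO_FEATURE.get(stated_goal)
    if feature is None:
        return False  # no onboarding signal means no prediction
    two_weeks_in = today - signed_up >= timedelta(days=14)
    return two_weeks_in and feature not in features_used


print(churn_risk("take_online_bookings", {"reminders"},
                 date(2025, 1, 1), date(2025, 1, 20)))  # prints True
```

Without the `stated_goal` captured at onboarding, this rule has nothing to key on, which is the "invisible churn risk" point in the list above.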
Run the experiment
If you've never tested adding a brief onboarding quiz, you're optimising in the dark.
The setup is simple. Create a two-to-three question flow that identifies the user's role, primary goal, or industry. Use the answers to personalise their initial experience, even minimally (a different welcome message, a different set of suggested actions, a different onboarding email sequence).
Run it against your current flow for two weeks. Measure sign-up completion, activation (whatever that means for your product), 7-day retention, and conversion to paid.
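The readout from that experiment can be sketched in a few lines. The cohort numbers below are invented placeholders, not real results; the shape to look for is the one described next: the quiz arm losing on sign-up completion while winning downstream.

```python
# Sketch of the experiment readout: compare the quiz cohort against the
# frictionless control on sign-up completion AND downstream metrics.
# All numbers are invented placeholders for illustration.

def rates(cohort: dict) -> dict:
    """Convert raw counts into per-visitor rates."""
    n = cohort["visitors"]
    return {k: v / n for k, v in cohort.items() if k != "visitors"}


quiz = {"visitors": 1000, "signed_up": 620, "activated": 310,
        "retained_d7": 220, "paid": 60}
control = {"visitors": 1000, "signed_up": 700, "activated": 280,
           "retained_d7": 170, "paid": 40}

for name, cohort in [("quiz", quiz), ("control", control)]:
    print(name, {k: round(v, 2) for k, v in rates(cohort).items()})
```

In this placeholder data the quiz arm signs up fewer visitors but activates, retains, and pays at higher rates, which is the "informative friction" signature.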
If the friction version loses on sign-up completion but wins on downstream metrics, you've found informative friction. Keep it and iterate on it. Accepting lower sign-up completion is a deliberate trade: you give up some top-of-funnel volume and win it back, with interest, through activation and retention.
If the friction version loses on everything, you've found annoying friction. Cut it.
Either way, you've learned something most teams never test because the orthodoxy prevents the experiment from happening.
Frequently Asked Questions
Won't users drop off if I add steps to the sign-up flow?
Some will. That's the point. Users who drop off because of a two-question quiz were unlikely to activate. The users who complete it arrive with higher intent and convert to active users at a higher rate. Measure downstream metrics, not just sign-up completion.
How many questions is too many?
Two to three questions is the sweet spot for most products. Each question should directly improve the user's experience (through personalisation or feature recommendation). If a question only serves internal analytics without visibly improving the user's experience, cut it.
What if my product serves a single use case? Do I still need a quiz?
Even single-use-case products benefit from understanding the user's experience level and primary motivation. A project management tool might ask whether the user is managing a team or personal projects. The answer changes which features to surface first, even if the underlying product is the same.
Related: Latent Demand Is Your AI Roadmap and The AI Usage Gap Is a Product Architecture Problem
Logan Lincoln
Product executive and AI builder based in Brisbane, Australia. Nine years in regulated B2B SaaS, currently shipping production AI platforms. Writing about AI UX from hands-on experience at OpenChair.


