Brave Leo AI Exploratory Research

Brave Leo is a smart AI assistant built right into your browser. Ask questions, summarize pages, create new content, and more. Privately.

Role: UX Research

Stage: End-to-end

Tags: #generative-AI #exploratory #early-stage #mixed-methods #roadmap #stakeholder-management #business-goals

In retrospect, seeing the generative AI industry refine its products and services based on insights similar to those uncovered during this research reinforces my confidence in the quality and relevance of the work.

Leo example

Background

In 2023, as generative AI took the tech industry by storm, the board and the executive team decided to seize the opportunity to integrate this emerging technology, both to keep our product competitive and to support its long-term development.

Constraints

Time! The board and the executive team wanted to release the feature quickly to gain market awareness and stand out among competitors. We had 3-4 months to ship an early version of Brave Leo AI.

MVP Approach

With these constraints in mind, the team decided to build an early version of Leo that covered the widest range of use cases while exploring heavier-lifting tasks in parallel. After the MVP, we would continue building out the backend, incorporating feedback from users, the system, and the evolution of the technology itself.

Leo Research Roadmap

As we moved on to develop Brave Leo AI, I also developed and maintained a research roadmap based on

  1. stakeholder discussions on product strategy
  2. previous studies and existing analytics data

Research Process

Stakeholder Alignment

To navigate the ambiguity of the early version, I met with the key stakeholders (product, design, ML engineers, and later marketing) to understand their thoughts, concerns, and assumptions. I also conducted a competitor analysis and secondary research to gain a deeper understanding of the space. All of the above helped frame the research objectives.

Preliminary Studies

The preliminary studies helped us define the recruiting criteria and informed the scope of the primary research.

Survey Data

Survey data from McKinsey & Company

Secondary Study

To gain contextual understanding and information on user segmentation, I conducted a secondary study that revealed key information on industry adoption, unaddressed risks, primary user motivations, and the potential areas of application for generative AI.

Competitor Analysis

To understand the landscape of existing solutions, the product manager and I also looked into similar services with AI technology to draw ideas and uncover gaps in the solutions the competitors offered.

Research Objectives

  1. Uncover user behaviors, perceived value, motivations, and unmet needs in users' current experience with gen. AI, particularly but not limited to
    1. Search vs. gen. AI
    2. Conversation history
  2. Explore user attitudes toward AI privacy.

Research Methods

  • Survey - to identify the target user segments most likely to benefit from the MVP, explore use cases, validate assumptions, and prioritize features.
  • Semi-structured interviews and observation - to dive deep into users' experience with gen. AI, gather data at a reflective level, and observe how participants interact with AI tools

Research Design

Phase 1. Survey design

The recruiting screener was released on identified platforms, including LinkedIn, and to a group of randomly selected Brave users for diverse outreach.

Recruiting criteria (sample size n=600)

  1. Established Brave users*
  2. Use generative AI tools

Survey design outline

  1. Gen. AI awareness and experience
  2. Frequency of usage
  3. Key user tasks
  4. User motivation and goals
  5. Perceived importance of functionalities and intended value
  6. Open-ended question for more context on the usage and attitude
  7. Demographics

*The study targeted people who fit the Brave personas, prioritizing value for current and potential Brave users, given that the overall product is the Brave browser.

Phase 2. Semi-structured interview and observation

Based on the findings of the Phase 1 survey, the PM and I preliminarily divided the target participants into two segments based on key user tasks, which informed the sample size for the interviews.

Recruiting criteria

  1. People who use gen. AI as a general chatbot
  2. People who use gen. AI to write

Sessions were conducted remotely. Participants were asked to review and sign the User Research Non-Disclosure Agreement; data-handling details and screen-sharing expectations were shared with participants prior to scheduling the session.

Research design outline

  1. Intro. Build rapport, set expectations, create a good environment for participants.
  2. Tell me about your general experience with AI chats.
  3. What do you expect to achieve from the AI tools?
  4. I’m interested in learning about the entire flow of you using AI chats, find an example in the past 2 weeks and walk me through what happened.
    1. [Likert scale] How would you rate the trustworthiness of gen. AI chats results?
  5. What did you do when you felt low confidence in the results? Walk me through an example from the past few weeks.
  6. What has not worked? What is hard to do?
  7. [Observe] Show me how you normally use AI chatbots/tools.
    1. tools and systems involved during a task (if OK, walk me through some of your history)
    2. AI chat setup and how to access
    3. the format of prompt
  8. Outro

Data Analysis

Phase 1. Survey design

Data analysis

Friedman Test - the survey used 7-point Likert-scale questions to measure users' perceived importance of the potential functionalities. I then used the Friedman test to determine whether there was a statistically significant difference in perceived importance across the functionalities.
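As a minimal sketch of the computation (the functionality names and ratings below are illustrative, not the actual survey data), the Friedman statistic can be derived directly from within-respondent ranks of the Likert scores:

```python
# Hypothetical sketch: Friedman test statistic on 7-point Likert ratings.
# Each row is one respondent's ratings of k candidate functionalities,
# e.g. [summarize, write, chat]. Data here is illustrative only.

def friedman_statistic(rows):
    """Return the Friedman chi-square statistic for a list of rating rows."""
    n = len(rows)      # number of respondents
    k = len(rows[0])   # number of functionalities rated by each respondent
    rank_sums = [0.0] * k
    for row in rows:
        # Rank each respondent's ratings, averaging ranks over ties.
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank for the tied block
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # chi^2 = 12 / (n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1)
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

ratings = [
    [7, 5, 3],
    [6, 5, 2],
    [7, 6, 3],
    [5, 4, 4],
]
print(friedman_statistic(ratings))  # → 7.125
```

The statistic is then compared against a chi-square distribution with k−1 degrees of freedom; in practice a library routine such as `scipy.stats.friedmanchisquare` does this in one call.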

Descriptive statistics and cross-tabulation - uncovered trends and segmented users
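A minimal sketch of how such a cross-tabulation can be built (the field names and responses below are illustrative placeholders, not the actual survey data):

```python
from collections import Counter

# Hypothetical survey responses: primary AI task vs. usage frequency.
responses = [
    {"task": "chat",  "frequency": "daily"},
    {"task": "write", "frequency": "daily"},
    {"task": "chat",  "frequency": "weekly"},
    {"task": "write", "frequency": "weekly"},
    {"task": "write", "frequency": "daily"},
]

# Count each (task, frequency) combination.
crosstab = Counter((r["task"], r["frequency"]) for r in responses)

# Print a simple contingency table: rows = tasks, columns = frequencies.
tasks = sorted({r["task"] for r in responses})
freqs = sorted({r["frequency"] for r in responses})
print(f"{'':8}" + "".join(f"{f:>8}" for f in freqs))
for t in tasks:
    print(f"{t:8}" + "".join(f"{crosstab[(t, f)]:>8}" for f in freqs))
```

In practice the same table is one call to `pandas.crosstab`; the point is that the cell counts directly surface which segments cluster around which usage patterns.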

Data visualization - provided clarity, accessibility, and a visually intuitive way to explore and communicate survey insights

Phase 2. Semi-structured interview and observation

Data analysis

Thematic analysis - identified recurring themes across interviews and observations and used the Affinity Mapping framework to cluster related ideas

Coding - used Dovetail to label data with descriptive codes that represent core ideas

Data synthesis - synthesized key findings with supporting evidence and mapped themes to actionable insights, ensuring alignment with the research objectives

Key Insights

I scheduled a meeting with all the key stakeholders for a detailed insight review, where they could digest and discuss the findings. This version of the key insights tells the full story and includes multimedia data.

TL;DR

  1. Integrate AI with search for in-depth information and results validation (e.g., embedded links leading to search results, quick paths).
  2. Enhance the organizational structure to improve efficiency and accessibility (e.g., a folder system, labels for quick recognition).
  3. Help users collaborate effectively with AI (e.g., a prompt enhancer, pre-set elements).
  4. Enhance AI's personalization to bring users more value (e.g., could users upload writing samples or feed in specific details?).

Housekeeping tasks

  1. Stored the full insights presentation on Dovetail, our research repository, and shared the link with key stakeholders.
  2. In addition, I shared the TL;DR version with a wider audience in the #userresearch Slack channel.

Brainstorming Workshops

After presenting the research insights, I also hosted design-focused and engineering-focused brainstorming workshops for stakeholders to share their ideas on solutions and get the conversations going.

Together, we identified the "low-hanging fruit" for Leo's early version and prioritized it on the development roadmap.

For example, to improve Leo's personalization, the short-term solution was to provide canned options for tone, length, and action for users to choose from; the long-term solution was to figure out how the model could learn and understand each user's unique writing style.

Early Version of Leo

A few examples of how research insights informed Leo's early version:

Leo example

[Suggested Questions] is designed to help users get started by introducing them to the capabilities of the AI assistant and educating them on how it can be used effectively.


The pre-definable script is a quick and effective short-term solution to enhance user communication preferences and streamline their interactions with the system.

Leo example

Leo example

The Leo MVP is designed for two main user tasks, aiming to boost productivity and spark creative ideas:

  1. effectively summarize and digest web information
  2. help users ideate and write

What's more

After Leo landed in Nightly (a pre-release version of the browser), I also conducted a quick round of usability testing on key user tasks; quick improvements were made based on the findings to increase legibility and usability.

After Leo was shipped, I also worked with the product manager to define and set up private analytics to track and measure performance.

Reflection

If given more time, I would have done a more in-depth exploration based on occupation. Informed by the secondary study, I wondered: would this tool be more valuable to certain occupations than others? The product strategy might have looked different. But given the time constraint, I believe this was the best approach.

Project Timeline

This research project took roughly 6-7 weeks.

Leo example