Accessible Redesign of the MaxDiff Survey

Role: UX Research Intern at Google (Products for All)
Methods: Survey, accessibility testing, experimental design, mixed-method evaluation
Impact: Findings adopted into Google’s company-wide UXR handbook


Overview

UX researchers often rely on MaxDiff surveys to prioritize product features. While statistically powerful, this method creates significant accessibility barriers for blind and low-vision (BLV) users. During my internship, I researched the feasibility of adapting and simplifying traditional MaxDiff, also known as best-worst scaling (BWS), to reduce accessibility challenges while still producing rigorous, actionable insights. By combining qualitative usability sessions, cognitive interviews, and a large-scale survey experiment, I demonstrated that a simplified variant, best-only scaling (BOS), can both improve accessibility and maintain the methodological rigor product teams rely on.


The Challenge

Traditional BWS asks respondents to scan a multi-column grid and mark both a most and a least important item in every question, a layout that is difficult to navigate non-visually. BOS strips each question down to a single list of choices. The figure below contrasts the two: the BWS question (left) asks, "Consider the following possible features to improve your experience on platform X, what are the most and least important features to you?" and lists Feature A through Feature E in the central column of a three-column matrix, flanked by "Most Important" and "Least Important" columns of radio buttons. The BOS question (right) asks only, "what is the most important feature to you?" and presents the same five features as a single column of radio buttons.
Example questions for traditional MaxDiff or best-worst scaling (left) and best-only scaling (right)


My Approach

I designed a three-part research agenda that evaluates BOS under real-world conditions while also helping a product team (anonymized here) better understand the preferences of its users with disabilities. Each phase was chosen to balance depth, scalability, and methodological rigor:

  1. Qualitative Pre-Test (BLV Users):
    • Why: We started small (8 participants) to capture rich, firsthand feedback before scaling up, ensuring we surfaced accessibility pain points and candidate usability improvements early.
    • How: 90-minute cognitive interview sessions comparing BOS and BWS, capturing both navigation challenges and the clarity of instructions.
  2. Large-Scale Survey (BLV Users):
    • Why: To validate BOS’s performance under typical industry conditions. MaxDiff is valued for its statistical precision at scale; we needed to see if BOS could replicate that.
    • How: Recruited 535 BLV respondents to complete BOS on 30 product features. Measured both experience (accessibility, mental demand, focus) and preference rankings.
  3. Survey Experiment (Deaf and Hard-of-Hearing (DHH) Users):
    • Why: We wanted to benchmark BOS against BWS with a user group that could access both survey versions.
    • How: Random assignment to BOS vs BWS conditions. Splitting DHH respondents between BOS and BWS let us compare the resulting feature rankings head-to-head, using descriptive and correlation analysis (a minimal analysis sketch follows this list).
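
To make the ranking comparison concrete, here is a minimal, hypothetical sketch of the kind of analysis phases 2 and 3 describe: deriving feature scores from pooled best-only choices with a simple counting analysis, then checking how closely the BOS ranking tracks a BWS ranking with Kendall's tau. This is not the study's actual code; the task data, feature names, and BWS scores are illustrative placeholders.

```python
# Hedged sketch, not the study's actual analysis code. The choice-task
# data, feature names, and BWS scores below are hypothetical placeholders.
from collections import Counter

from scipy.stats import kendalltau

# Each BOS task shows a subset of features; the respondent picks one "best".
# (shown_features, chosen_feature) pairs pooled across all respondents.
bos_tasks = [
    (("A", "B", "C", "D", "E"), "C"),
    (("A", "B", "C", "D", "E"), "A"),
    (("A", "B", "C", "D", "E"), "C"),
    # ... hundreds more tasks in a real survey
]

# Counting analysis: a feature's score is the share of tasks in which it
# was chosen as best, out of the tasks in which it was shown.
appearances = Counter()
best_counts = Counter()
for shown, chosen in bos_tasks:
    appearances.update(shown)
    best_counts[chosen] += 1

bos_scores = {f: best_counts[f] / appearances[f] for f in appearances}

# Hypothetical BWS scores for the same features (e.g., best-minus-worst
# counts, normalized), as they might come out of a standard MaxDiff analysis.
bws_scores = {"A": 0.30, "B": -0.10, "C": 0.45, "D": -0.40, "E": -0.25}

# Compare the two methods' rankings with a rank correlation.
features = sorted(bos_scores)
tau, p_value = kendalltau(
    [bos_scores[f] for f in features],
    [bws_scores[f] for f in features],
)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```

In a real study, the scores would come from hundreds of respondents, and full MaxDiff estimation (e.g., hierarchical Bayes) could replace the counting analysis; the rank correlation is what allows the two methods to be benchmarked head-to-head.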

Key Findings


Impact


Reflection

Through this project, I learned how to: