Reimagining the web for the ear. Most online content assumes visual interaction. With the rise of voice technology, I co-led interview studies exploring how different content types can be translated into audio for navigation and consumption. Findings informed design considerations for multimodal interfaces in which hearing and speech complement visual input.
Published: CSCW 2021, CHI 2022
Voice Interaction, Interface Design, Multimodal UX
Age, motor ability, and input performance. Analyzed data from 700+ participants to examine how age and motor ability affect input performance with mouse and touchscreen. Results showed continuous variation rather than clear-cut categories, challenging the assumption that age or disability alone can predict performance.
Published: ASSETS 2020 (Best Paper)
Accessibility, Input Techniques, Aging, Data Analysis
Robotic touch for emotion regulation. Explored how touch and movement in furry robots can convey emotion. Through user studies and interviews, we found that people empathize with robot “emotions” when given certain narrative framings, highlighting both opportunities and risks in affective computing.
Published: CHI 2018
Haptics, Social Robots, Affective Computing
Making AR/VR testing more accessible. Contributed to the development and evaluation of MRAT, a tool for low-cost usability testing of Mixed Reality applications. My contributions included tool development, user study setup, and competitive analysis.
Published: CHI 2020 (Best Paper)
AR/VR, Usability Studies, Prototyping