Understanding Motivations Behind Harmful Content

Role: Quantitative UX Research Intern at Meta (Integrity Team)
Methods: Literature review, survey design & execution, quantitative analysis
Impact: Findings and reusable report informed moderation strategy discussions across product, policy, and engineering teams

Cartoon-style illustration of a young woman in an orange hoodie sitting at a desk, holding a survey form that reads “Why do you post harmful content on Facebook?” She looks thoughtful, with a pen resting on her chin. A computer monitor in the background shows a Facebook-like interface with an angry emoji icon.
Generated by GPT-5.


Overview

Harmful content undermines user trust, yet enforcement strategies are often reactive. To improve moderation, the Integrity team needed better visibility into why users post harmful content: their motivations, their awareness of policies, and the social dynamics involved.


Approach

I co-led the project, which had two parts:
  1. Literature review: Synthesized academic and industry research into a practical report that guided survey design and became a reusable reference for moderation teams.
  2. Survey study: Designed and launched a large-scale survey targeting users flagged for harmful content (de-identified). To ensure rigor, I crafted neutral survey language, balanced closed- and open-ended questions, and collaborated with data scientists on recruitment strategy. For analysis, I used R to run descriptive statistics and paired chi-square tests, and thematically coded open-ended responses (a simplified sketch of this kind of analysis appears after this list). I also quickly ramped up on Meta’s internal quant tools for survey deployment and analysis.
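
The actual analysis ran in R on Meta’s internal tooling, which I can’t share. As an illustration only, the sketch below shows the general shape of the quantitative step in Python: descriptive statistics over a categorical survey item, plus a chi-square test of independence between two response variables. The file name and column names are hypothetical placeholders, not the real survey fields.

```python
# Illustrative sketch only: the real analysis used R and internal Meta tools.
# File and column names here are hypothetical placeholders.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical de-identified survey export: one row per respondent.
responses = pd.read_csv("survey_responses_deidentified.csv")

# Descriptive statistics: distribution of self-reported motivations.
motivation_share = responses["primary_motivation"].value_counts(normalize=True)
print(motivation_share)

# Chi-square test of independence: does stated policy awareness
# vary across self-reported motivation categories?
contingency = pd.crosstab(
    responses["primary_motivation"],
    responses["policy_awareness"],
)
chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
```

The same pattern generalizes to the other closed-ended items: build a contingency table for each pair of variables of interest, test for independence, and report effect sizes alongside p-values before layering in the thematic codes from open-ended responses.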

Key Insights


Impact

The literature review and survey findings became a shared resource across integrity and engineering teams. I presented results to PMs, designers, and ML engineers, helping teams reframe assumptions about harmful content and align on moderation strategy.