Fatima Alqabandi
Computational Social Scientist | PhD, Sociology | MS, Statistical Science
fatima.alqabandi[at]duke.edu
Hi! I’m a computational social scientist with a Ph.D. in Sociology and an M.S. in Statistical Science from Duke University. I run large-scale experiments, surveys, and custom-built digital platforms to study how platform design, algorithms, and social dynamics shape online discourse and user experience. I’m especially interested in the gap between what platforms promise (control, safety, transparency) and how people actually interpret and respond to those features in practice.
A lot of my work sits at the intersection of human–AI interaction, political behavior, and applied causal inference. I love building innovative research tools that make it possible to test questions that are otherwise hard or impossible to study on real platforms, particularly questions about expression, trust, user experience, and how people make sense of algorithmic systems.
Research highlights
Platform control, algorithms, and user experience
Social media platforms increasingly offer users tools to “manage” what they see (e.g., topic controls and customizable feeds). But we know surprisingly little about how these features affect perception and trust.
In a survey experiment (forthcoming at New Media & Society; recipient of Duke’s 2025 MS in Statistical Science Award), we offered participants the option to filter out toxic political content in a simulated social media feed—while holding the content constant across conditions. Participants who opted into filtering perceived posts as more hostile than those who had no filtering option, even though all participants saw the same content.
Self-censorship, disclosure, and “outnumbered” environments
I study when people share what they really think—and when they keep quiet—especially under social pressure.
- In one behavioral study, I found Democrats voiced unpopular opinions more readily to Republicans than to fellow Democrats, suggesting that self-censorship often reflects in-group dynamics, not just fear of the opposition.
- In an “outnumbered” experiment using our custom platform, participants surrounded by opposing viewpoints felt less comfortable sharing opinions and evaluated both the platform and other users more negatively (paper here).
Gender and influence in political conversations
In a collaboration published in Scientific Reports (2023), we built a custom chat app and randomly assigned gender labels to discussion partners. Misrepresenting a man as a woman reduced his influence, while misrepresenting a woman as a man did not improve hers.
Research tools
A lot of my work depends on building research infrastructure that lets us manipulate key features of online environments while measuring real behavior.
Social Media Accelerator (SMA)
This is a controlled, Twitter-like experimental environment populated with dynamic LLM-based confederates (overview paper). We use it to study disclosure, social influence, and user experience under tightly controlled conditions.
Qualtrics-integrated LLM chat app
This is a tool that embeds an LLM-powered chat directly inside surveys, so participants can interact with synthetic partners without leaving Qualtrics (GitHub repo).
Applied AI tools for real workflows
I also build LLM-powered systems outside academia. These include an LLM-based chat app that lets doctors query messy ICU data conversationally and pull basic statistical summaries from it, and a chat-based assistant that helps local residents navigate complex government services and requirements.
Interests
- Experiments, surveys, and computational social science
- Causal inference and quantitative methods
- Human–AI interaction
- Public opinion, belief disclosure, and self-censorship
- Platform design, moderation, and user experience
- Social network analysis
- And generally designing studies that help answer exciting research questions!