DebunkBot
A free, public tool that lets anyone converse with an AI trained to engage thoughtfully with conspiracy theories. Used by over 150,000 people and featured in The New York Times, The Guardian, and The Wall Street Journal.
Try DebunkBot
Studying how beliefs form, persist, and change—and building tools to help.
Carnegie Mellon University · Department of Social and Decision Sciences
We investigate the psychology of belief—what makes people adopt, maintain, and occasionally revise their views on contested issues. Our work combines rigorous behavioral science with emerging AI technologies to develop scalable interventions for misinformation, polarization, and democratic health.
Can AI systems engage productively with misinformation? We develop dialogue-based interventions using large language models to address conspiracy theories, vaccine hesitancy, and political misperceptions.
Why do people believe conspiracy theories? We study the cognitive, motivational, and social factors that predict susceptibility—and test interventions that actually work.
The psychology of ideology, authoritarianism, and polarization. We're especially interested in whether psychological differences between left and right are real or artifacts of how we measure them.
Better measurement for better science. We develop new scales, test existing ones, and advocate for adversarial collaboration as a norm in contested research areas.
Testing whether AI dialogues can reduce belief in conspiracy theories as they emerge and spread—before they become entrenched. Funded by DARPA.
Personalized AI conversations that address parents' specific concerns about vaccines (HPV, COVID, childhood immunizations). Funded by the Physicians Foundation.
Can AI replicate the remarkable effects of human deep canvassing conversations? We're testing this for political attitudes and policy preferences.
Addressing climate skepticism and inaction through personalized AI conversations that meet people where they are.
Evaluating the persuasive capabilities of frontier LLMs and developing safeguards against misuse. Funded by Schmidt Sciences.
The lab is new and growing. Check back for updates!
Director
Prospective Member
I'm recruiting PhD students, postdocs, and research assistants who are excited about the intersection of AI and behavioral science. I value intellectual curiosity, methodological rigor, and a collaborative spirit.
David Rand (Cornell) · Gordon Pennycook (Cornell) · Hause Lin (MIT)
DARPA · $329,485
Schmidt Sciences · $298,925
CSET · $800,000
Physicians Foundation · $150,000
Anti-Defamation League · $40,000