Viewpoints Lab

Studying how beliefs form, persist, and change—and building tools to help.

Carnegie Mellon University · Department of Social and Decision Sciences

We investigate the psychology of belief—what makes people adopt, maintain, and occasionally revise their views on contested issues. Our work combines rigorous behavioral science with emerging AI technologies to develop scalable interventions that counter misinformation, reduce polarization, and support democratic health.

Research Areas

AI-Assisted Persuasion

Can AI systems engage productively with misinformation? We develop dialogue-based interventions using large language models to address conspiracy theories, vaccine hesitancy, and political misperceptions.

Conspiracy Beliefs

Why do people believe conspiracy theories? We study the cognitive, motivational, and social factors that predict susceptibility—and test interventions that actually work.

Political Psychology

The psychology of ideology, authoritarianism, and polarization. We're especially interested in whether psychological differences between left and right are real or artifacts of how we measure them.

Methods & Measurement

Better measurement for better science. We develop new scales, test existing ones, and advocate for adversarial collaboration as a norm in contested research areas.

Current Projects

DebunkBot

A free, public tool that lets anyone have a conversation with an AI trained to engage thoughtfully with conspiracy theories. It has reached over 150,000 users and has been featured in The New York Times, The Guardian, and The Wall Street Journal.

Try DebunkBot

Real-Time Conspiracy Intervention

Testing whether AI dialogues can reduce belief in conspiracy theories as they emerge and spread—before they become entrenched. Funded by DARPA.

Vaccine Dialogue

Personalized AI conversations that address parents' specific concerns about vaccines (HPV, COVID, childhood immunizations). Funded by the Physicians Foundation.

AI Deep Canvassing

Can AI replicate the remarkable effects of human deep canvassing conversations? We're testing this for political attitudes and policy preferences.

Climate Dialogue

Addressing climate skepticism and inaction through personalized AI conversations that meet people where they are.

AI Safety & Persuasion

Evaluating the persuasive capabilities of frontier LLMs and developing safeguards against misuse. Funded by Schmidt Sciences.

Lab Members

The lab is new and growing. Check back for updates!

Thomas Costello

Director

Your Name Here

Prospective Member

Interested in joining?

I'm recruiting PhD students, postdocs, and research assistants who are excited about the intersection of AI and behavioral science. I value intellectual curiosity, methodological rigor, and a collaborative spirit.

Email me · CMU SDS PhD Program

Key Collaborators

David Rand (Cornell) · Gordon Pennycook (Cornell) · Hause Lin (MIT)

Funding & Support

DARPA · $329,485
Schmidt Sciences · $298,925
CSET · $800,000
Physicians Foundation · $150,000
Anti-Defamation League · $40,000