Does Consulting AI for a Second Opinion Undermine Patients’ Trust in Their Doctors?

Chen, C., Sun, Y., Liao, M., & Sundar, S. S. (2025, June). Does consulting AI for a second opinion undermine patients’ trust in their doctors? Paper to be presented at the 75th annual conference of the International Communication Association, Denver, CO.

Abstract: The rapid diffusion of AI enables doctors to consult an AI system for a second opinion during patient visits. Given that an AI system may either align with or diverge from the doctor’s recommendation, how might AI agreement or disagreement influence patients’ trust in their doctors? To answer this question, we conducted an experiment (N = 135) in which participants interacted with a large language model (LLM)-powered chatbot role-playing a human doctor during a mental health consultation. After the consultation, the doctor offered to consult AI for a second opinion. Results revealed that when patients perceived the doctor as human-like, AI disagreement increased perceived medical uncertainty and reduced perceptions of the doctor’s professional autonomy, whereas AI agreement increased perceptions of doctor laziness. Theoretical implications for agency negotiation between humans and AI, along with methodological insights for leveraging LLMs in chatbot design and message manipulation, are discussed.

Related Research