When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI systems
Chen, C., Liao, M., & Sundar, S. S. (2024). When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI systems. Proceedings of the Second International Symposium on Trustworthy Autonomous Systems, 1–17. https://doi.org/10.1145/3686038.3686066
Abstract: Explanations are believed to aid understanding of AI models, but do they affect users’ perceptions of and trust in AI, especially in the presence of algorithmic bias? If so, when should explanations be provided to optimally balance explainability and usability? To answer these questions, we conducted a user study (N = 303) exploring how explanation timing influences users’ perceived trust calibration, understanding of the AI system, and user experience and user interface satisfaction under both biased and unbiased AI performance conditions. We found that pre-explanations seem most valuable when the AI shows bias in its performance, whereas post-explanations appear more favorable when the system is bias-free. Showing both pre- and post-explanations tends to result in higher perceived trust calibration regardless of bias, despite concerns about content redundancy. Implications for designing socially responsible, explainable, and trustworthy AI interfaces are discussed.