When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI systems
Chen, C., Liao, M., & Sundar, S. S. (2024). When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI systems. Proceedings of the Second International Symposium on Trustworthy Autonomous Systems, 1–17. https://doi.org/10.1145/3686038.3686066
Abstract: Explanations are believed to aid understanding of AI models, but do they affect users' perceptions of and trust in AI, especially in the presence of algorithmic bias? If so, when should explanations be provided to optimally balance explainability and usability? To answer these questions, we conducted a user study (N = 303) exploring how explanation timing influences users' perception of trust calibration, understanding of the AI system, and user experience and user interface satisfaction under both biased and unbiased AI performance conditions. We found that pre-explanations seem most valuable when the AI shows bias in its performance, whereas post-explanations appear more favorable when the system is bias-free. Showing both pre- and post-explanations tends to result in higher perceived trust calibration regardless of bias, despite concerns about content redundancy. Implications for designing socially responsible, explainable, and trustworthy AI interfaces are discussed.