Human-Centered Artificial Intelligence: The Effects of Explanation and User Feedback on Justice Perceptions Toward AI Systems
Chuan, C. H., Sun, R., and Tian, S. (2025, August). “Human-Centered Artificial Intelligence: The Effects of Explanation and User Feedback on Justice Perceptions Toward AI Systems,” paper presented at the Association for Education in Journalism and Mass Communication (AEJMC) Annual Conference, San Francisco, CA.
Abstract: This study examines how a popular eXplainable AI (XAI) approach, decision trees, and a feedback mechanism that allows users to challenge an AI’s decisions may influence perceptions of justice among imposed users—those affected by AI decisions without their choice. In the context of an AI-powered loan approval system, the results demonstrate that XAI effectively enhances perceptions of informational and procedural justice toward the system, even when users’ loan applications are denied. Furthermore, giving users the opportunity to appeal the AI system’s denial decision significantly improves their perceptions of both procedural and informational justice.
Related Research
- Role Incongruity and Information Accuracy in AI-Mediated Communication: An Integrated Approach to AI Evaluation and Perception of Gender Stereotype — Jiwon Kim (Ph.D. student) and Glenna Read, “Role Incongruity and Information Accuracy in AI-Mediated Communication: An Integrated Approach to AI Evaluation and Perception of Gender Stereotype,” paper accepted to the International Communication […]
- Understanding Users’ Intention to Adopt AI Nutrition Chatbots: Insights from UTAUT2 — Mao, L., Lu, P., & Mengqi (Maggie) Liao, “Understanding Users’ Intention to Adopt AI Nutrition Chatbots: Insights from UTAUT2,” paper to be presented at the 76th annual ICA conference, June 2026, in Cape […]