Human-Centered Artificial Intelligence: The Effects of Explanation and User Feedback on Justice Perceptions Toward AI Systems
Chuan, C. H., Sun, R., & Tian, S. (2025, August). "Human-Centered Artificial Intelligence: The Effects of Explanation and User Feedback on Justice Perceptions Toward AI Systems," paper presented at the Association for Education in Journalism and Mass Communication (AEJMC) Annual Conference, San Francisco, CA.
Abstract: This study examines how the popular eXplainable AI (XAI) approach of decision trees, together with a feedback mechanism allowing users to challenge AI decisions, may influence perceptions of justice among imposed users—those affected by AI decisions without their choice. In the context of an AI-powered loan approval system, the results demonstrate that XAI effectively enhances perceptions of informational and procedural justice toward the system, even when users are denied their loan application. Furthermore, giving users the opportunity to appeal the AI system's denial decision significantly improves their perceptions of both procedural and informational justice.