Human-Centered Artificial Intelligence: The Effects of Explanation and User Feedback on Justice Perceptions Toward AI Systems

Chuan, C. H., Sun, R., and Tian, S. (2025, August). “Human-Centered Artificial Intelligence: The Effects of Explanation and User Feedback on Justice Perceptions Toward AI Systems,” paper presented at the Association for Education in Journalism and Mass Communication (AEJMC) Annual Conference, San Francisco, CA.

Abstract: This study examines how the popular eXplainable AI (XAI) approach of decision trees and a feedback mechanism allowing users to challenge AI decisions may influence perceptions of justice among imposed users—those affected by AI decisions without their choice. In the context of an AI-powered loan approval system, the results demonstrate that XAI effectively enhances perceptions of informational and procedural justice toward the system, even when users’ loan applications are denied. Furthermore, giving users the opportunity to appeal the AI system’s denial decision significantly improves their perceptions of both procedural and informational justice.

Related Research