Human-Centered Artificial Intelligence: The Effects of Explanation and User Feedback on Justice Perceptions Toward AI Systems
Chuan, C. H., Sun, R., and Tian, S. (2025, August). “Human-Centered Artificial Intelligence: The Effects of Explanation and User Feedback on Justice Perceptions Toward AI Systems,” paper presented at the Association for Education in Journalism and Mass Communication (AEJMC) Annual Conference, San Francisco, CA.
Abstract: This study examines how a popular eXplainable AI (XAI) approach, decision trees, and a feedback mechanism that allows users to challenge an AI’s decisions may influence perceptions of justice among imposed users—those affected by AI decisions without their choice. In the context of an AI-powered loan approval system, the results demonstrate that XAI effectively enhances perceptions of informational and procedural justice toward the system, even when users’ loan applications are denied. Furthermore, giving users the opportunity to appeal the AI system’s denial decision significantly improves their perceptions of both procedural and informational justice.