Human-Centered Artificial Intelligence: The Effects of Explanation and User Feedback on Justice Perceptions Toward AI Systems
Chuan, C. H., Sun, R., and Tian, S. (2025, August). “Human-Centered Artificial Intelligence: The Effects of Explanation and User Feedback on Justice Perceptions Toward AI Systems,” paper presented at the Association for Education in Journalism and Mass Communication (AEJMC) Annual Conference, San Francisco, CA.
Abstract: This study examines how decision trees, a popular eXplainable AI (XAI) approach, and a feedback mechanism that allows users to challenge AI decisions influence perceptions of justice among imposed users—those affected by AI decisions without their choice. In the context of an AI-powered loan approval system, the results demonstrate that XAI effectively enhances perceptions of informational and procedural justice toward the system, even when users are denied their loan application. Furthermore, giving users the opportunity to appeal the AI system’s denial decision significantly improves their perceptions of both procedural and informational justice.