Congratulations to my PhD student, Justin Cheung, for leading and publishing our paper, "Explainable AI and Trust: How News Media Shapes Public Support for AI-powered Autonomous Passenger Drones," in the journal Public Understanding of Science. This publication is part of our larger Project Descartes, which examines the social implications of hybrid AI applications.
Abstract:
This study examines the relationships between news media attention to AI, perceived explainability, trust, and public support for autonomous passenger drones. We found significant associations between perceived explainability, that is, the perception of AI explainability prior to interaction with the AI system, and all dimensions of trust in AI (i.e., performance, purpose, process). We also found that the public acquired perceived explainability through news media attention to AI. Furthermore, when the public considered support for autonomous passenger drones, only the trust-in-performance dimension was relevant. The findings underscore the importance of ensuring AI's explainability and highlight the pivotal role of news media in shaping public perceptions of emerging AI technologies. Theoretical and practical implications are discussed.