Enhancing User Trust in Autonomous Vehicles through Explainable AI: A Human-Computer Interaction Perspective

Authors

  • Dr. Beatrice Kern, Professor of Information Systems, University of Applied Sciences Potsdam, Germany

Keywords

Human Factors, User Trust

Abstract

The emergence of autonomous vehicles (AVs) presents a transformative shift in personal transportation. However, widespread adoption hinges on user trust in the safety and reliability of these complex systems. This research paper investigates the role of Explainable Artificial Intelligence (XAI) in enhancing user trust in AVs, specifically from a Human-Computer Interaction (HCI) perspective.

The paper begins by outlining the inherent challenges to user trust in AVs. Unlike human drivers, AVs rely on opaque AI algorithms to navigate the environment. This lack of transparency can lead to anxiety and a sense of relinquishing control. Additionally, the potential for unforeseen situations and system errors can further erode user confidence.

The paper then explores the potential of XAI to bridge this trust gap. XAI techniques aim to make the decision-making processes of AI systems comprehensible to humans. By providing explanations for an AV's actions, such as why it braked suddenly or chose to change lanes, users can gain insight into the reasoning behind the system's maneuvers. This transparency can foster a sense of trust and predictability in the user experience.

The core of the paper delves into HCI considerations for implementing XAI in AVs. It emphasizes that effective XAI design goes beyond simply presenting raw data: explanations must be tailored to the user's needs, knowledge level, and driving situation. The paper then surveys HCI design principles for presenting such explanations in AVs.




Published

10 May 2022

How to Cite

[1] Dr. Beatrice Kern, “Enhancing User Trust in Autonomous Vehicles through Explainable AI: A Human-Computer Interaction Perspective”, J. of Artificial Int. Research and App., vol. 2, no. 1, pp. 1–12, May 2022. Accessed: Nov. 23, 2024. [Online]. Available: https://aimlstudies.co.uk/index.php/jaira/article/view/57