Explainable recommendations and calibrated trust: two systematic users’ errors

Authors: Naiseh, M., Cemiloglu, D., Althani, D., Jiang, N. and Ali, R.

Journal: Computer

Publisher: IEEE

ISSN: 0018-9162

Abstract:

The increased adoption of collaborative human-AI decision-making tools has triggered a need to explain recommendations for safe and effective collaboration. However, evidence from recent literature shows that current implementations of AI explanations are failing to achieve adequate trust calibration. This failure has led decision-makers to either over-trust, e.g., follow incorrect recommendations, or under-trust, i.e., reject correct recommendations. In this paper, we explore how users interact with explanations and why trust calibration errors occur. We take clinical decision-support systems as a case study. Our empirical investigation is based on a think-aloud protocol and observations, supported by scenarios and a decision-making exercise utilizing a set of explainable recommendation interfaces. Our study involved 16 participants from the medical domain who use clinical decision support systems frequently. Our findings showed that participants made two systematic errors while interacting with the explanations: either skipping them or misapplying them in their task.

http://eprints.bournemouth.ac.uk/35465/

Source: Manual