Explainable Recommendations and Calibrated Trust: Two Systematic User Errors
Authors: Naiseh, M., Cemiloglu, D., Al Thani, D., Jiang, N. and Ali, R.
Journal: Computer
Volume: 54
Issue: 10
Pages: 28-37
eISSN: 1558-0814
ISSN: 0018-9162
DOI: 10.1109/MC.2021.3076131
Abstract: The increased adoption of collaborative human-artificial intelligence decision-making tools triggered a need to explain recommendations for safe and effective collaboration. We explore how users interact with explanations and why trust-calibration errors occur, taking clinical decision-support systems as a case study.
https://eprints.bournemouth.ac.uk/35465/
Source: Scopus
Explainable Recommendations and Calibrated Trust: Two Systematic User Errors
Authors: Naiseh, M., Cemiloglu, D., Al-Thani, D., Jiang, N. and Ali, R.
Journal: Computer
Volume: 54
Issue: 10
Pages: 28-37
eISSN: 1558-0814
ISSN: 0018-9162
DOI: 10.1109/MC.2021.3076131
https://eprints.bournemouth.ac.uk/35465/
Source: Web of Science (Lite)
Explainable recommendations and calibrated trust: two systematic user errors
Authors: Naiseh, M., Cemiloglu, D., Al-Thani, D., Jiang, N. and Ali, R.
Journal: Computer
Publisher: IEEE
ISSN: 0018-9162
Abstract: The increased adoption of collaborative human-AI decision-making tools has triggered a need to explain recommendations for safe and effective collaboration. However, evidence from the recent literature shows that current implementations of AI explanations fail to achieve adequate trust calibration. Such failures lead decision-makers either to over-trust, e.g., follow incorrect recommendations, or to under-trust, i.e., reject correct recommendations. In this paper, we explore how users interact with explanations and why trust-calibration errors occur. We take clinical decision-support systems as a case study. Our empirical investigation is based on a think-aloud protocol and observations, supported by scenarios and a decision-making exercise using a set of explainable recommendation interfaces. The study involved 16 participants from the medical domain who use clinical decision-support systems frequently. Our findings showed that participants made two systematic errors while interacting with the explanations: skipping them or misapplying them in their task.
https://eprints.bournemouth.ac.uk/35465/
Source: Manual
Explainable recommendations and calibrated trust: two systematic user errors
Authors: Naiseh, M., Cemiloglu, D., Al-Thani, D., Jiang, N. and Ali, R.
Journal: Computer
Volume: 54
Issue: 10
Pages: 28-37
ISSN: 0018-9162
Abstract: The increased adoption of collaborative human-AI decision-making tools has triggered a need to explain recommendations for safe and effective collaboration. However, evidence from the recent literature shows that current implementations of AI explanations fail to achieve adequate trust calibration. Such failures lead decision-makers either to over-trust, e.g., follow incorrect recommendations, or to under-trust, i.e., reject correct recommendations. In this paper, we explore how users interact with explanations and why trust-calibration errors occur. We take clinical decision-support systems as a case study. Our empirical investigation is based on a think-aloud protocol and observations, supported by scenarios and a decision-making exercise using a set of explainable recommendation interfaces. The study involved 16 participants from the medical domain who use clinical decision-support systems frequently. Our findings showed that participants made two systematic errors while interacting with the explanations: skipping them or misapplying them in their task.
https://eprints.bournemouth.ac.uk/35465/
https://www.computer.org/csdl/magazine/co/2021/10/09548016/1x9TFwLNTgs
Source: BURO EPrints