How the different explanation classes impact trust calibration: The case of clinical decision support systems

Authors: Naiseh, M., Al-Thani, D., Jiang, N. and Ali, R.

Journal: International Journal of Human-Computer Studies

Volume: 169

Publisher: Academic Press

eISSN: 1095-9300

ISSN: 1071-5819

DOI: 10.1016/j.ijhcs.2022.102941

Abstract:

Machine learning has made rapid advances in safety-critical applications, such as traffic control, finance, and healthcare. Given the criticality of the decisions these systems support and the potential consequences of following their recommendations, it has become equally critical to provide users with explanations that help them interpret machine learning models in general, and black-box models in particular. However, despite the agreement on explainability as a necessity, there is little evidence on how recent advances in the eXplainable Artificial Intelligence (XAI) literature can be applied in collaborative decision-making tasks, i.e., a human decision-maker and an AI system working together, to contribute effectively to the process of trust calibration. This research conducts an empirical study to evaluate four XAI classes for their impact on trust calibration. We take clinical decision support systems as a case study and adopt a within-subject design followed by semi-structured interviews. We gave participants clinical scenarios and XAI interfaces as a basis for decision-making and rating tasks. Our study involved 41 medical practitioners who use clinical decision support systems frequently. We found that users perceive the contribution of explanations to trust calibration differently according to the XAI class and to whether the XAI interface design fits their job constraints and scope. We also revealed additional requirements on how explanations should be instantiated and designed to support better trust calibration. Finally, we build on our findings and present guidelines for designing XAI interfaces.

https://eprints.bournemouth.ac.uk/37728/

Sources: Scopus, Web of Science (Lite), Manual, BURO EPrints

Preferred by: Nan Jiang