Nudging through Friction: an Approach for Calibrating Trust in Explainable AI

Authors: Naiseh, M., Al-Mansoori, R.S., Al-Thani, D., Jiang, N. and Ali, R.

Conference: IEEE International Conference on Behavioural and Social Computing

Dates: 29-31 October 2021

Journal: Proceedings of 2021 8th IEEE International Conference on Behavioural and Social Computing, BESC 2021

Publisher: IEEE

Place of Publication: New York

ISBN: 9781665400237

DOI: 10.1109/BESC53957.2021.9635271

Abstract:

Explainability has become an essential requirement for safe and effective collaborative Human-AI environments, especially when recommendations are generated by black-box models. One goal of eXplainable AI (XAI) is to help humans calibrate their trust while working with intelligent systems, i.e., to avoid situations where human decision-makers over-trust the AI when it is incorrect or under-trust it when it is correct. XAI, in this context, aims to help humans understand the AI's reasoning and decide whether to follow or reject its recommendations. However, recent studies have shown that users, on average, continue to over-trust (or under-trust) AI recommendations, which indicates that XAI has failed to support trust calibration. This failure to aid trust calibration was due to the assumption that XAI users would cognitively engage with explanations and interpret them without bias. In this work, we hypothesize that XAI interaction design can play a role in fostering users' cognitive engagement with XAI and consequently enhance trust calibration. To this end, we propose friction as a nudge-based approach to help XAI users calibrate their trust in AI, and we present the results of a preliminary study of its potential in fulfilling that role.

https://eprints.bournemouth.ac.uk/38675/

Sources: Scopus, Manual, BURO EPrints

Preferred by: Nan Jiang
