Explainable recommendation: when design meets trust calibration

Authors: Naiseh, M., Al-Thani, D., Jiang, N. and Ali, R.

Journal: World Wide Web

Publisher: Springer Nature

Volume: 24

Issue: 5

Pages: 1857-1884

eISSN: 1573-1413

ISSN: 1386-145X

DOI: 10.1007/s11280-021-00916-0

Abstract:

Human-AI collaborative decision-making tools are increasingly being applied in critical domains such as healthcare. However, these tools are often perceived as closed and opaque by human decision-makers. An essential requirement for their success is the ability to provide explanations about themselves that are understandable and meaningful to users. While explanations generally have positive connotations, studies have shown that assuming users will interact and engage with these explanations can introduce trust calibration errors, such as fostering irrational or less thoughtful agreement or disagreement with the AI recommendation. In this paper, we explore how to support trust calibration through explanation interaction design. Our research method comprised two main phases. We first conducted a think-aloud study with 16 participants to reveal the main trust calibration errors concerning explainability in human-AI collaborative decision-making tools. We then conducted two co-design sessions with eight participants to identify design principles and techniques for explanations that support trust calibration. As a conclusion of our research, we provide five design principles: design for engagement, challenging habitual actions, attention guidance, friction, and support training and learning. Our findings are meant to pave the way towards a more integrated framework for designing explanations with trust calibration as a primary goal.

https://eprints.bournemouth.ac.uk/35888/

Sources: Scopus, PubMed, Web of Science (Lite), Manual, Europe PubMed Central, BURO EPrints
