C-XAI: Design Method for Explainable AI Interfaces to Enhance Trust Calibration.
Authors: Naiseh, M.
Conference: Bournemouth University, Faculty of Science and Technology
Abstract: Human-AI collaborative decision-making tools are on an accelerated rise in several critical application domains, such as the healthcare and military sectors. It is often difficult for users of such systems to understand the AI's reasoning and output, particularly when the underlying algorithm and logic are hidden and treated as a black box, both for commercial sensitivity and because of the challenges of explaining them. A lack of explainability and the opacity of the underlying algorithms can perpetuate injustice and bias and decrease users' acceptance and satisfaction. Integrating eXplainable AI (XAI) into AI-based decision-making tools has therefore become a crucial requirement for a safe and effective human-AI collaborative environment.
Recently, the impact of explainability on trust calibration has become a main research question. This role refers to how explanations, and the way they are communicated, help users form a correct mental model of the AI-based tool, so that the human decision-maker is better informed on whether to trust or distrust the AI's recommendations. Although studies have shown that explanations can improve trust calibration, such studies often assumed that users would engage cognitively with the explanations to calibrate their trust. Recent studies have shown that even when explanations are communicated to people, trust calibration is not necessarily improved. Such failures of XAI systems to enhance trust calibration have been linked to factors such as cognitive biases, e.g., people can be selective about what they read and rely on. Other studies have shown that XAI failed to improve calibrated trust due to inconsistencies in the properties of XAI methods, which are rarely considered in XAI interface design. Overall, users of XAI systems fail, on average, to calibrate their trust: human decision-makers working collaboratively with an AI can still notably follow incorrect recommendations or reject correct ones.
This thesis aims to provide C-XAI, a design method expressly tailored to support trust calibration in the XAI interface. The method identifies properties of XAI methods that may introduce trust calibration risks and helps produce designs that mitigate these risks. A trust calibration risk is defined in this thesis as a limitation in the interface design that may hinder users' ability to calibrate their trust. The thesis followed a qualitative research approach with experts, practitioners, and end-users who use AI-based decision-making tools in their work environment. The data collection methods included a literature review, semi-structured interviews, think-aloud sessions, and a co-design approach to develop C-XAI. These methods helped conceptualise various aspects of trust calibration and XAI, including XAI requirements during human-AI collaborative decision-making tasks, trust calibration risks, and design principles that support trust calibration. The results of these studies were used to devise C-XAI. C-XAI was then evaluated with domain experts and end-users. The evaluation investigated the method's effectiveness, completeness, clarity, and engagement, as well as the communication it enables between different stakeholders. The evaluation results showed that the method helped stakeholders understand the design problem and develop XAI designs that support trust calibration.
This thesis has four main contributions. First, it conceptualises the trust calibration design problem with respect to XAI interface design. Second, it elicits the main limitations of XAI interface design for supporting trust calibration. Third, it proposes key design principles that help XAI interface designers support trust calibration. Finally, it proposes and evaluates the C-XAI design method to systematically guide XAI interface design towards trust calibration.
https://eprints.bournemouth.ac.uk/36345/
Source: Manual