Explainable Recommendations and Calibrated Trust - Research Protocol
Authors: Naiseh, M.
Publisher: Bournemouth University
Abstract:
Calibrated trust has become an important design goal for Human-AI collaborative decision-making tools. It refers to a user's accurate understanding of, appropriate reliance on, and ability to predict an AI-based tool's behaviour and recommendations. Explainable AI is an emerging field in which explanations accompany AI-based recommendations to help the human decision-maker understand, rely on, and predict AI behaviour. Such an approach is intended to improve humans' trust calibration while they work collaboratively with an AI. However, evidence from the literature suggests that explanations have not improved trust calibration and have even introduced new errors. Designers of such explainable systems often assumed that humans would engage cognitively with AI-based explanations and use them in their Human-AI collaborative decision-making tasks. This research explores users' behaviour and interaction styles with AI-based explanations during a Human-AI collaborative decision-making task. Such an investigation will help future studies develop design solutions for AI explanations that enhance trust calibration and operationalize explainability during Human-AI decision-making tasks.