XAI for Group-AI Interaction: Towards Collaborative and Inclusive Explanations

Authors: Naiseh, M., Webb, C., Underwood, T., Ramchurn, G., Walters, Z., Thavanesan, N. and Vigneswaran, G.

Journal: CEUR Workshop Proceedings

Volume: 3793

Pages: 249-256

ISSN: 1613-0073

Abstract:

The increasing integration of Machine Learning (ML) into decision-making across various sectors has raised concerns about ethics, legality, explainability, and safety, highlighting the necessity of human oversight. In response, eXplainable AI (XAI) has emerged as a means to enhance transparency by providing insights into ML model decisions and offering humans an understanding of the underlying logic. Despite its potential, existing XAI models often lack practical usability and fail to improve human-AI performance, as they may introduce issues such as overreliance. This underscores the need for further research in Human-Centered XAI to improve the usability of current XAI methods. Notably, much current research focuses on one-to-one interactions between the XAI system and an individual decision-maker, overlooking the many-to-one dynamics of real-world scenarios in which groups of humans collaborate with XAI in collective decision-making. In this late-breaking work, we draw upon Human-Centered XAI research and discuss how XAI design could transition to group-AI interaction. We identify four potential challenges in moving XAI from human-AI interaction to group-AI interaction. This paper contributes to advancing the field of Human-Centered XAI and fosters discussion of group-XAI interaction, calling for further research in this area.

https://eprints.bournemouth.ac.uk/40531/

Sources: Scopus; BURO EPrints
