Explainability Design Patterns in Clinical Decision Support Systems

Authors: Naiseh, M.

Journal: Lecture Notes in Business Information Processing

Volume: 385 LNBIP

Pages: 613-620

eISSN: 1865-1356

ISBN: 9783030503154

ISSN: 1865-1348

DOI: 10.1007/978-3-030-50316-1_45

Abstract:

This paper reports on an ongoing PhD project on explaining the recommendations of clinical decision support systems (CDSSs) to medical practitioners. Explainability research in the medical domain has recently seen a surge of advances, focused on two main approaches: the first develops models that are explainable and transparent by nature (e.g. rule-based algorithms); the second provides post-hoc interpretations of black-box models without examining the mechanisms behind them (e.g. LIME). However, overlooking the human factors and usability aspects of explanations has introduced new risks when following system recommendations, such as over-trust and under-trust. Due to these limitations, there is a growing demand for usable explanations for CDSSs that enable trust calibration and informed decision-making by helping practitioners identify when a recommendation is correct to follow. This research aims to develop explainability design patterns for calibrating medical practitioners' trust in CDSSs. The paper presents the PhD methodology and discusses the literature around the research problem.
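
The abstract contrasts inherently interpretable, rule-based models with post-hoc explainers such as LIME. As a rough, illustrative sketch only (not taken from the paper; the dataset, feature names and classifier below are placeholder assumptions), a post-hoc, local explanation of a single black-box recommendation might look like this in Python:

# Illustrative sketch: post-hoc explanation of one prediction from a
# "black-box" classifier, using LIME's tabular explainer.
# The data, feature names and model are placeholders, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Placeholder tabular data: rows = patients, columns = measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# A black-box model whose internal mechanism is not inspected directly.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Local, post-hoc explanation: which features pushed this one recommendation?
explainer = LimeTabularExplainer(
    X,
    feature_names=["age", "blood_pressure", "glucose", "heart_rate"],
    class_names=["no_risk", "risk"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # (feature condition, weight) pairs for this patient

Local feature weights of this kind are the sort of explanation whose usability, and whose effect on over-trust and under-trust, the project's design patterns are meant to address.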

https://eprints.bournemouth.ac.uk/34804/

Source: Scopus

Explainability Design Patterns in Clinical Decision Support Systems

Authors: Naiseh, M.

Journal: Research Challenges in Information Science (RCIS 2020)

Volume: 385

Pages: 613-620

eISSN: 1865-1356

ISBN: 978-3-030-50315-4

ISSN: 1865-1348

DOI: 10.1007/978-3-030-50316-1_45

https://eprints.bournemouth.ac.uk/34804/

Source: Web of Science (Lite)

Explainability Design Patterns in Clinical Decision Support Systems

Authors: Naiseh, M.

Conference: The 14th International Conference on Research Challenges in Information Science

Dates: 23-25 September 2020

Journal: Proceedings - International Conference on Research Challenges in Information Science

ISSN: 2151-1349

DOI: 10.1007/978-3-030-50316-1_45

Abstract:

This paper reports on an ongoing PhD project on explaining the recommendations of clinical decision support systems (CDSSs) to medical practitioners. Explainability research in the medical domain has recently seen a surge of advances, focused on two main approaches: the first develops models that are explainable and transparent by nature (e.g. rule-based algorithms); the second provides post-hoc interpretations of black-box models without examining the mechanisms behind them (e.g. LIME). However, overlooking the human factors and usability aspects of explanations has introduced new risks when following system recommendations, such as over-trust and under-trust. Due to these limitations, there is a growing demand for usable explanations for CDSSs that enable trust calibration and informed decision-making by helping practitioners identify when a recommendation is correct to follow. This research aims to develop explainability design patterns for calibrating medical practitioners' trust in CDSSs. The paper presents the PhD methodology and discusses the literature around the research problem.

https://eprints.bournemouth.ac.uk/34804/

Source: Manual

Explainability Design Patterns in Clinical Decision Support Systems

Authors: Naiseh, M.

Conference: The 14th International Conference on Research Challenges in Information Science, Proceedings

Pages: 613-620

ISSN: 1865-1348

Abstract:

This paper reports on an ongoing PhD project on explaining the recommendations of clinical decision support systems (CDSSs) to medical practitioners. Explainability research in the medical domain has recently seen a surge of advances, focused on two main approaches: the first develops models that are explainable and transparent by nature (e.g. rule-based algorithms); the second provides post-hoc interpretations of black-box models without examining the mechanisms behind them (e.g. LIME). However, overlooking the human factors and usability aspects of explanations has introduced new risks when following system recommendations, such as over-trust and under-trust. Due to these limitations, there is a growing demand for usable explanations for CDSSs that enable trust calibration and informed decision-making by helping practitioners identify when a recommendation is correct to follow. This research aims to develop explainability design patterns for calibrating medical practitioners' trust in CDSSs. The paper presents the PhD methodology and discusses the literature around the research problem.

https://eprints.bournemouth.ac.uk/34804/

http://www.rcis-conf.com/rcis2020/

Source: BURO EPrints