Comparison of Contextual Importance and Utility with LIME and Shapley Values
Authors: Främling, K., Westberg, M., Jullum, M., Madhikermi, M. and Malhi, A.
Conference: EXTRAAMAS 2021: Third International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems
Journal: Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2021 (Lecture Notes in Computer Science, including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12688 LNAI
Pages: 39-54
ISSN: 0302-9743
eISSN: 1611-3349
ISBN: 978-3-030-82016-9
DOI: 10.1007/978-3-030-82017-6_3
Abstract: Different explainable AI (XAI) methods are based on different notions of ‘ground truth’. In order to trust explanations of AI systems, the ground truth has to provide fidelity towards the actual behaviour of the AI system. An explanation with poor fidelity towards the AI system’s actual behaviour cannot be trusted, no matter how convincing it appears to users. The Contextual Importance and Utility (CIU) method differs in several ways from currently popular outcome explanation methods such as Local Interpretable Model-agnostic Explanations (LIME) and Shapley values. Notably, CIU does not build an intermediate interpretable model as LIME does, and it makes no assumptions about the linearity or additivity of feature importance. CIU also introduces the notion of value utility and a definition of feature importance that differs from those of LIME and Shapley values. We argue that LIME and Shapley values actually estimate ‘influence’ (rather than ‘importance’), which conflates importance and utility. The paper compares the three methods on the validity of their ground-truth assumptions and their fidelity towards the underlying model through a series of benchmark tasks. The results confirm that LIME results tend to be neither coherent nor stable. CIU and Shapley values give rather similar results when explanations are limited to ‘influence’. However, by separating the ‘importance’ and ‘utility’ elements, CIU can provide more expressive and flexible explanations than LIME and Shapley values.
https://eprints.bournemouth.ac.uk/36356/
Sources: Scopus; Web of Science (Lite); BURO EPrints
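The abstract's central distinction is that CIU reports two separate quantities per feature: contextual importance (how much the output can vary when a feature sweeps its range, with the other features held at the instance's values) and contextual utility (where the current output sits within that attainable range), whereas LIME and Shapley values return a single 'influence' score that conflates the two. Below is a minimal sketch of that idea in Python, assuming a black-box regression-style predict function, a single-feature sweep over a fixed grid, and known output bounds; the function name and parameters are illustrative assumptions, not the paper's reference implementation, which generalises to sets of features via sampling.

import numpy as np

def contextual_importance_utility(predict, x, j, lo, hi,
                                  out_min=0.0, out_max=1.0, n=100):
    # Sweep feature j over [lo, hi] while keeping the other features
    # fixed at the instance's values (a one-feature CIU sketch).
    grid = np.linspace(lo, hi, n)
    X = np.tile(x, (n, 1))
    X[:, j] = grid
    y = predict(X)
    c_min, c_max = float(y.min()), float(y.max())
    y0 = float(predict(x.reshape(1, -1))[0])
    # Importance: fraction of the output range this feature can span.
    ci = (c_max - c_min) / (out_max - out_min)
    # Utility: position of the current output within that span
    # (epsilon guards against a constant output, i.e. an unused feature).
    cu = (y0 - c_min) / (c_max - c_min + 1e-12)
    return ci, cu

# Toy illustration with a hypothetical model that depends only on feature 0:
predict = lambda X: 1.0 / (1.0 + np.exp(-3.0 * (X[:, 0] - 0.5)))
x = np.array([0.9, 0.2])
print(contextual_importance_utility(predict, x, j=0, lo=0.0, hi=1.0))
print(contextual_importance_utility(predict, x, j=1, lo=0.0, hi=1.0))

In the toy run, feature 0 gets high importance (it spans most of the output range) and high utility (0.9 is a favourable value for this instance), while feature 1 gets near-zero importance; a single influence score of the kind LIME or Shapley values produce would blend these two aspects into one number.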