Context-specific sampling method for contextual explanations

Authors: Madhikermi, M., Malhi, A. and Främling, K.

Journal: ESANN 2021 Proceedings - 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning

Pages: 593-598

ISBN: 9782875870827

DOI: 10.14428/esann/2021.ES2021-124


Explaining the results of machine learning models is an active research topic in the Artificial Intelligence (AI) domain, with the objective of providing mechanisms to understand and interpret the results of the underlying black-box model in a human-understandable form. With this objective, several eXplainable Artificial Intelligence (XAI) methods have been designed and developed based on varied fundamental principles. Some methods, such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are based on a surrogate model, while others, such as Contextual Importance and Utility (CIU), do not create or rely on a surrogate model to generate their explanations. Despite the difference in underlying principles, these methods use various sampling techniques, such as uniform sampling and weighted sampling, to generate explanations. CIU, which emphasizes context-aware decision explanation, employs a uniform sampling method to generate representative samples. In this research, we target uniform sampling methods, whose samples are not guaranteed to be representative in the presence of strong non-linearities or exceptional input feature value combinations. The objective of this research is to develop a sampling method that addresses these concerns. To this end, a new adaptive weighted sampling method is proposed. To verify its efficacy in generating explanations, the proposed method has been integrated with CIU and tested on a special test case.
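To make the sampling distinction concrete, the sketch below contrasts uniform sampling of one input feature over its range (as CIU-style methods do when estimating contextual importance) with a weighted alternative that concentrates samples near the current feature value. This is an illustrative sketch only: the function names, the Gaussian weighting scheme, and the simple contextual-importance formula are assumptions for exposition, not the paper's adaptive method or its code.

```python
import numpy as np

def uniform_feature_samples(model, x, feature, bounds, n=500, seed=0):
    """Uniformly sample one feature over [lo, hi] while holding the
    rest of the context vector x fixed; return the model outputs."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    samples = np.tile(x, (n, 1))
    samples[:, feature] = rng.uniform(lo, hi, size=n)
    return model(samples)

def weighted_feature_samples(model, x, feature, bounds, n=500,
                             scale=0.1, seed=0):
    """Weighted alternative: draw values from a Gaussian centred on the
    current feature value (clipped to the range), so the region around
    the context is probed more densely, e.g. near a non-linearity."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    vals = rng.normal(x[feature], scale * (hi - lo), size=n)
    samples = np.tile(x, (n, 1))
    samples[:, feature] = np.clip(vals, lo, hi)
    return model(samples)

def contextual_importance(outputs, out_lo, out_hi):
    """Spread of outputs over the sampled feature range, normalised by
    the model's overall output range (a simplified CI estimate)."""
    return (outputs.max() - outputs.min()) / (out_hi - out_lo)
```

With a toy model `f(x) = x0**2 + x1` and context `x = (0.5, 1.0)`, varying `x0` uniformly over `[0, 1]` yields outputs spread over roughly `[1, 2]`, giving a CI estimate near 0.5 when the overall output range is `[0, 2]`; the weighted sampler instead clusters its probes around `x0 = 0.5`.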

Source: Scopus