Fake News Detection Using Explainable Artificial Intelligence
Authors: Singh, S.K., Assi, S., Ginige, T., Mohammed, A.H., B. Wahit, F. and Al-Jumeily OBE, D.
Volume: 257
Pages: 477-488
DOI: 10.1007/978-981-96-7749-8_31
Abstract: Social media have become very popular over the last few years among individuals globally. Social media allow vast amounts of information to be shared in minimal time, crossing geographical boundaries and language limitations. However, a disadvantage of social media has been the rise of fake news, which has reached every discipline, whether related to healthcare, the environment or societal matters. As such news emerges at a rapid pace, it is important to develop rapid algorithms for prospectively identifying it. Artificial intelligence (AI) algorithms, and more specifically large language models (LLMs), have addressed this limitation, as they offer in-depth insight into the classification of fake versus real news. Yet one issue often reported with LLMs is that they are ‘black box’ models that do not justify to the reader how decisions were reached and why. Subsequently, explainable artificial intelligence (xAI) emerged, providing transparency, interpretability and justification for decisions made by AI models. Therefore, this work explored using xAI for the identification of fake news obtained from a well-known dataset, the LIAR dataset. The work utilised Distilled Bidirectional Encoder Representations from Transformers (DistilBERT), a type of LLM that is able to achieve high accuracy with low computational power and without complicated system requirements. Explainability was then applied using two explainability functions: Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). These two explainability functions revealed the features that played a significant role in the classification of fake news. In summary, the findings highlight the efficiency of xAI in classifying fake news.
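Illustrative sketch (not the authors' implementation): the abstract describes fine-tuning DistilBERT on the LIAR dataset and then explaining its predictions with LIME and SHAP. The minimal example below shows how a DistilBERT sequence classifier can be wrapped in a probability function and passed to LIME's text explainer. The checkpoint name, the binary fake/real label set, and the example statement are assumptions for illustration only; the paper's fine-tuning details and label scheme (LIAR itself uses six truthfulness labels) are not given in this record.

```python
# Hypothetical sketch of the DistilBERT + LIME step described in the abstract.
# Assumes a classifier fine-tuned on LIAR; here an untuned base checkpoint is
# loaded as a placeholder, so the head weights would still need training.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from lime.lime_text import LimeTextExplainer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # assumed binary fake/real setup
model.eval()

def predict_proba(texts):
    # LIME expects a function mapping a list of strings to class probabilities.
    enc = tokenizer(list(texts), padding=True, truncation=True,
                    return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=["fake", "real"])
statement = "Says the unemployment rate doubled under the current administration."
explanation = explainer.explain_instance(
    statement, predict_proba, num_features=8, num_samples=200)
print(explanation.as_list())  # tokens with the largest contribution weights
```

A SHAP explanation could be produced analogously, for example by passing a transformers text-classification pipeline to shap.Explainer; that step is omitted from this sketch.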
Source: Scopus