Personalising explainable recommendations: Literature and conceptualisation
Authors: Naiseh, M., Jiang, N., Ma, J. and Ali, R.
Journal: Advances in Intelligent Systems and Computing
Volume: 1160 AISC
Pages: 518-533
eISSN: 2194-5365
ISSN: 2194-5357
DOI: 10.1007/978-3-030-45691-7_49
Abstract: Explanations in intelligent systems aim to enhance users' understanding of the system's reasoning process and the resulting decisions and recommendations. Explanations typically increase trust, user acceptance and retention. The need for explanations is on the rise due to increasing public concerns about AI and the emergence of new laws, such as the General Data Protection Regulation (GDPR) in Europe. However, users differ in their needs for explanations, and such needs can depend on their dynamic context. Explanations also risk being perceived as information overload, which makes personalisation all the more necessary. In this paper, we review the literature on personalising explanations in intelligent systems. We synthesise a conceptualisation that brings together the various aspects considered important for personalisation needs and implementation. Moreover, we identify several challenges that require further research, including the frequency of explanations and their evolution in tandem with the ongoing user experience.
https://eprints.bournemouth.ac.uk/34805/
Source: Scopus