Personalising explainable recommendations: Literature and conceptualisation

Authors: Naiseh, M., Jiang, N., Ma, J. and Ali, R.

Journal: Advances in Intelligent Systems and Computing

Volume: 1160 AISC

Pages: 518-533

eISSN: 2194-5365

ISBN: 9783030456900

ISSN: 2194-5357

DOI: 10.1007/978-3-030-45691-7_49

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020.

Abstract: Explanations in intelligent systems aim to enhance users' understanding of the system's reasoning process and the resulting decisions and recommendations. Explanations typically increase trust, user acceptance and retention. The need for explanations is on the rise due to increasing public concern about AI and the emergence of new laws, such as the General Data Protection Regulation (GDPR) in Europe. However, users differ in their needs for explanations, and such needs can depend on their dynamic context. Explanations risk being perceived as information overload, which makes personalisation all the more necessary. In this paper, we review the literature on personalising explanations in intelligent systems. We synthesise a conceptualisation that brings together the various aspects considered important for personalisation needs and implementation. Moreover, we identify several challenges that need further research, including the frequency of explanations and their evolution in tandem with the ongoing user experience.