Mohammad (Mo) Naiseh holds the position of Lecturer in AI and Data Science at Bournemouth University, UK. Within the Department of Computing and Informatics, he combines his academic and professional expertise in Artificial Intelligence (AI) and Human-Computer Interaction (HCI). He completed his PhD in Computing at Bournemouth University, UK, with a dissertation titled "C-XAI: Design Method for Explainable AI Interfaces to Enhance Trust Calibration."
Previously, he served for two years as a post-doctoral research fellow at the University of Southampton, where his research focused on trustworthy AI, with particular emphasis on the explainability, interpretability, and fairness of AI systems. His primary objective is to develop AI systems that align with human values and ethics, ensuring their positive impact on society and the economy. To achieve this, he explores the integration of emerging AI algorithms with human-centred design practices, employing quantitative and qualitative methods to generate recommendations for technology design.
Mohammad's particular area of interest is user trust, which he considers a critical requirement for deploying AI-based solutions to real-world problems. He recognises the dynamic nature of trust, encompassing both over-trust and under-trust, and aims to understand and address this phenomenon. His research focuses on the principles, methods, and tools necessary to engineer trust-aware technology and calibrate trust within such systems.
- Human-AI teaming
- Human-Centred AI
- Explainable AI
- Naiseh, M., Al-Thani, D., Jiang, N. and Ali, R., 2023. How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human Computer Studies, 169.
- Naiseh, M., Cemiloglu, D., Al Thani, D., Jiang, N. and Ali, R., 2021. Explainable Recommendations and Calibrated Trust: Two Systematic User Errors. Computer, 54 (10), 28-37.
- Naiseh, M., Al-Thani, D., Jiang, N. and Ali, R., 2021. Explainable recommendation: when design meets trust calibration. World Wide Web, 24 (5), 1857-1884.
- Naiseh, M. and Shukla, P., 2023. The well-being of Autonomous Vehicles (AVs) users under uncertain situations. ACM International Conference Proceeding Series.
- Cai, A., Bentley, C.M., Zamani, E., Naiseh, M. and Sbaffi, L., 2023. Intersectional Analysis of the Challenges and Opportunities of Equitable Remote Operation in the UK Maritime Sector. ACM International Conference Proceeding Series.
- Lamb, S., Naiseh, M., Clark, J., Ramchurn, S. and Norman, T., 2023. Learning from Expert Teams. In: Contemporary Ergonomics & Human Factors 2022, 25-26 April 2022, Birmingham, UK.
- Naiseh, M., 2022. Industry Led Use-Case Development for Human-Swarm Operations. In: AAAI 2022 Spring Symposium Series, 21-23 March 2022, Stanford University.
- Naiseh, M., Bentley, C., Ramchurn, S., Williams, E., Awad, E. and Alix, C., 2022. Methods, Tools and Techniques for Trustworthy Autonomous Systems (TAS) Design and Development. EICS 2022 - Companion of the 2022 ACM SIGCHI Symposium on Engineering Interactive Computing Systems, 66-69.
- Naiseh, M., Bentley, C. and Ramchurn, S.D., 2022. Trustworthy Autonomous Systems (TAS): Engaging TAS experts in curriculum design. IEEE Global Engineering Education Conference, EDUCON, 2022-March, 901-905.
- Cemiloglu, D., Naiseh, M., Catania, M., Oinas-Kukkonen, H. and Ali, R., 2021. The Fine Line Between Persuasion and Digital Addiction. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12684 LNCS, 289-307.
- Al-Mansoori, R.S., Naiseh, M., Al-Thani, D. and Ali, R., 2021. Digital Wellbeing for All: Expanding Inclusivity to Embrace Diversity in Socio-Emotional Status. 34th British Human Computer Interaction Conference, BCS HCI 2021, 256-261.
- Naiseh, M., Al-Mansoori, R.S., Al-Thani, D., Jiang, N. and Ali, R., 2021. Nudging through Friction: an Approach for Calibrating Trust in Explainable AI. Proceedings of 2021 8th IEEE International Conference on Behavioural and Social Computing, BESC 2021.
- Naiseh, M., Jiang, N., Ma, J. and Ali, R., 2020. Personalising explainable recommendations: Literature and conceptualisation. Advances in Intelligent Systems and Computing, 1160 AISC, 518-533.
- Aldhayan, M., Naiseh, M., McAlaney, J. and Ali, R., 2020. Online Peer Support Groups for Behavior Change: Moderation Requirements. Lecture Notes in Business Information Processing, 385 LNBIP, 157-173.
- Naiseh, M., Jiang, N., Ma, J. and Ali, R., 2020. Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks. Lecture Notes in Business Information Processing, 385 LNBIP, 212-228.
- Naiseh, M., 2020. Explainability Design Patterns in Clinical Decision Support Systems. Lecture Notes in Business Information Processing, 385 LNBIP, 613-620.
- Naiseh, M., 2021. Explainable recommendation: When design meets trust calibration – Research protocol. Bournemouth University.
- Naiseh, M., 2021. Explainable Recommendations and Calibrated Trust - Research Protocol. Bournemouth University.
- Naiseh, M., 2021. C-XAI: Design Method for Explainable AI Interfaces to Enhance Trust Calibration. PhD Thesis. Bournemouth University, Faculty of Science and Technology.
- Naiseh, M., Soorati, M.D. and Ramchurn, S., 2023. Outlining the design space of eXplainable swarm (xSwarm): experts perspective.
- Abioye, A.O., Naiseh, M., Hunt, W., Clark, J., Ramchurn, S.D. and Soorati, M.D., 2023. The Effect of Data Visualisation Quality and Task Density on Human-Swarm Interaction.
Teaching Profile (Undergraduate)
- Economics of Information Security
- Security Information and Event Management
- Ethical Hacking & Countermeasures
- Networks and Cyber Security
- University of Southampton, Visiting Researcher (2023-2024)
- PGR representative, SciTech Computing & Informatics Research
- Good Clinical Practice (GCP), 01 Nov 2019
- MSc in Computer Science (2017)