Dr Mohammad Naiseh
- 01202 962290
- mnaiseh1 at bournemouth dot ac dot uk
- http://orcid.org/0000-0002-4927-5086
- Lecturer in Data Science & AI
- Poole House P334b, Talbot Campus, Fern Barrow, Poole, BH12 5BB
- Keywords:
- Human-computer interaction
- Machine learning
Biography
Mohammad Naiseh (Mo) is a Responsible AI Researcher and Lecturer in Human-Centred AI at Bournemouth University, where he also completed his PhD in Human-Centred AI. His research focuses on Explainable AI, trust calibration, and Human-AI interaction, particularly in healthcare and autonomous systems. Naiseh has published in high-impact journals such as AI & Society, Journal of Responsible Technology, and International Journal of Human-Computer Studies. He has presented his work at leading conferences, including the International Conference on Robotics and Automation (ICRA), the International Symposium on Distributed Autonomous Robotic Systems (DARS), and the ACM/IEEE International Conference on Human-Robot Interaction (HRI).
As a Fellow at the UKRI Trustworthy Autonomous Systems Hub, based at the University of Southampton, Naiseh led several research projects, including explainability in human-swarm environments, ethical autonomous vehicles, and intersectional approaches to trustworthy AI. He has organized workshops at conferences such as ICRA, IROS, and EICS, and contributed to panels at events such as the TAS-HUB Showcase 2024. Naiseh’s teaching expertise includes courses on Explainable AI, Deep Learning, and Research Methods, and he has supervised numerous master’s and PhD students. His research has had real-world impact, including collaborations with the Met Police Cyber Crime Unit and with Southampton and Poole Hospitals. Naiseh’s work bridges AI design and human trust, emphasizing ethical and socially beneficial AI systems.
Research
- Human-AI teaming
- Human-Centred AI
- Explainable AI
Journal Articles
- Naiseh, M., Babiker, A., Al-Shakhsi, S., Cemiloglu, D., Al-Thani, D., Montag, C. and Ali, R., 2025. Attitudes Towards AI: The Interplay of Self-Efficacy, Well-Being, and Competency. Journal of Technology in Behavioral Science.
- Naiseh, M., Simkute, A., Zieni, B., Jiang, N. and Ali, R., 2024. C-XAI: A conceptual framework for designing XAI tools that support trust calibration. Journal of Responsible Technology, 17.
- Naiseh, M., Clark, J., Akarsu, T., Hanoch, Y., Brito, M., Wald, M., Webster, T. and Shukla, P., 2024. Trust, risk perception, and intention to use autonomous vehicles: an interdisciplinary bibliometric review. AI and Society.
- Naiseh, M., Al-Thani, D., Jiang, N. and Ali, R., 2023. How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human-Computer Studies, 169.
- Naiseh, M., Cemiloglu, D., Al Thani, D., Jiang, N. and Ali, R., 2021. Explainable Recommendations and Calibrated Trust: Two Systematic User Errors. Computer, 54 (10), 28-37.
- Naiseh, M., Al-Thani, D., Jiang, N. and Ali, R., 2021. Explainable recommendation: when design meets trust calibration. World Wide Web, 24 (5), 1857-1884.
Chapters
- Naiseh, M., 2024. Social eXplainable AI (Social XAI): Towards Expanding the Social Benefits of XAI. The Impact of Artificial Intelligence on Societies: Understanding Attitude Formation Towards AI. Springer Nature.
- Soorati, M.D., Naiseh, M., Hunt, W., Parnell, K., Clark, J. and Ramchurn, S.D., 2024. Enabling trustworthiness in human-swarm systems through a digital twin. Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams. 93-125.
Conferences
- Abioye, A.O., Hunt, W., Gu, Y., Schneiders, E., Naiseh, M., Fischer, J.E., Ramchurn, S.D., Soorati, M.D., Archibald, B. and Sevegnani, M., 2024. The Effect of Predictive Formal Modelling at Runtime on Performance in Human-Swarm Interaction. ACM/IEEE International Conference on Human-Robot Interaction, 172-176.
- Naiseh, M., Soorati, M.D. and Ramchurn, S., 2024. Outlining the Design Space of eXplainable Swarm (xSwarm): Experts’ Perspective. Springer Proceedings in Advanced Robotics, 28, 28-41.
- Malhi, A., Naiseh, M. and Jangra, K., 2024. Real-time Twitter data sentiment analysis to predict the recession in the UK using Graph Neural Networks. 20th International Wireless Communications and Mobile Computing Conference, IWCMC 2024, 1595-1600.
- Durojaye, H. and Naiseh, M., 2024. Explainable AI for Intrusion Detection Systems: A Model Development and Experts’ Evaluation. Lecture Notes in Networks and Systems, 1066 LNNS, 301-318.
- Naiseh, M., Webb, C., Underwood, T., Ramchurn, G., Walters, Z., Thavanesan, N. and Vigneswaran, G., 2024. XAI for Group-AI Interaction: Towards Collaborative and Inclusive Explanations. CEUR Workshop Proceedings, 3793, 249-256.
- Naiseh, M. and Shukla, P., 2023. The well-being of Autonomous Vehicles (AVs) users under uncertain situations. ACM International Conference Proceeding Series.
- Cai, A., Bentley, C.M., Zamani, E., Naiseh, M. and Sbaffi, L., 2023. Intersectional Analysis of the Challenges and Opportunities of Equitable Remote Operation in the UK Maritime Sector. ACM International Conference Proceeding Series.
- Lamb, S., Naiseh, M., Clark, J., Ramchurn, S. and Norman, T., 2023. Learning from Expert Teams. In: Contemporary Ergonomics & Human Factors 2022, 25-26 April 2022, Birmingham, UK.
- Abioye, A.O., Naiseh, M., Hunt, W., Clark, J., Ramchurn, S.D. and Soorati, M.D., 2023. The Effect of Data Visualisation Quality and Task Density on Human-Swarm Interaction. IEEE International Workshop on Robot and Human Communication, RO-MAN, 1494-1501.
- Naiseh, M., 2022. Industry Led Use-Case Development for Human-Swarm Operations. In: AAAI 2022 Spring Symposium Series, 21-23 March 2022, Stanford University.
- Naiseh, M., Bentley, C., Ramchurn, S., Williams, E., Awad, E. and Alix, C., 2022. Methods, Tools and Techniques for Trustworthy Autonomous Systems (TAS) Design and Development. EICS 2022 - Companion of the 2022 ACM SIGCHI Symposium on Engineering Interactive Computing Systems, 66-69.
- Naiseh, M., Bentley, C. and Ramchurn, S.D., 2022. Trustworthy Autonomous Systems (TAS): Engaging TAS experts in curriculum design. IEEE Global Engineering Education Conference, EDUCON, 2022-March, 901-905.
- Cemiloglu, D., Naiseh, M., Catania, M., Oinas-Kukkonen, H. and Ali, R., 2021. The Fine Line Between Persuasion and Digital Addiction. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12684 LNCS, 289-307.
- Al-Mansoori, R.S., Naiseh, M., Al-Thani, D. and Ali, R., 2021. Digital Wellbeing for All: Expanding Inclusivity to Embrace Diversity in Socio-Emotional Status. 34th British Human Computer Interaction Conference, BCS HCI 2021, 256-261.
- Naiseh, M., Al-Mansoori, R.S., Al-Thani, D., Jiang, N. and Ali, R., 2021. Nudging through Friction: an Approach for Calibrating Trust in Explainable AI. Proceedings of 2021 8th IEEE International Conference on Behavioural and Social Computing, BESC 2021.
- Naiseh, M., Jiang, N., Ma, J. and Ali, R., 2020. Personalising explainable recommendations: Literature and conceptualisation. Advances in Intelligent Systems and Computing, 1160 AISC, 518-533.
- Aldhayan, M., Naiseh, M., McAlaney, J. and Ali, R., 2020. Online Peer Support Groups for Behavior Change: Moderation Requirements. Lecture Notes in Business Information Processing, 385 LNBIP, 157-173.
- Naiseh, M., Jiang, N., Ma, J. and Ali, R., 2020. Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks. Lecture Notes in Business Information Processing, 385 LNBIP, 212-228.
- Naiseh, M., 2020. Explainability Design Patterns in Clinical Decision Support Systems. Lecture Notes in Business Information Processing, 385 LNBIP, 613-620.
Reports
- Naiseh, M., 2021. Explainable recommendation: When design meets trust calibration – Research protocol. Bournemouth University.
- Naiseh, M., 2021. Explainable Recommendations and Calibrated Trust - Research Protocol. Bournemouth University.
Theses
- Naiseh, M., 2021. C-XAI: Design Method for Explainable AI Interfaces to Enhance Trust Calibration. PhD Thesis. Bournemouth University, Faculty of Science and Technology.
Preprints
- Naiseh, M., Soorati, M.D. and Ramchurn, S., 2023. Outlining the design space of eXplainable swarm (xSwarm): experts’ perspective.
- Abioye, A.O., Naiseh, M., Hunt, W., Clark, J., Ramchurn, S.D. and Soorati, M.D., 2023. The Effect of Data Visualisation Quality and Task Density on Human-Swarm Interaction.
Profile of Teaching UG
- Economics of Information Security
- Security Information and Event Management
- Ethical Hacking & Countermeasures
- Networks and Cyber Security
Grants
- PRESERVE - Ethical and privacy-preserving Big Data platform for supporting criminal investigations (UKRI, 01 Sep 2024). Awarded
- REFORMIST: Mirrored decision support fRamEwork FOR Multidisciplinary Teams in Oesophageal cancer (UKRI, 16 Aug 2023). Awarded
- Extreme XP (Horizon Europe, 04 Jan 2023). Awarded
External Responsibilities
- University of Southampton, Visiting Researcher (2023-2024)
Internal Responsibilities
- PGR representative, SciTech Computing & Informatics Research
Attended Training
- Good Clinical Practice (GCP), 01 Nov 2019
Qualifications
- PGCert in Education Practice (Bournemouth University, 2024)
- MSc in Computer Science (2017)
Honours
- Fellowship of the Higher Education Academy (Advance HE, 2024)