Dr Mohammad Naiseh
- mnaiseh1 at bournemouth dot ac dot uk
- Lecturer in Data Science & AI
- Poole House P334b, Talbot Campus, Fern Barrow, Poole, BH12 5BB
- Human-computer interaction
- Machine learning
Mohammad is a Lecturer in the Department of Computing and Informatics, Bournemouth University, UK. He received his MSc in Informatics Engineering from Tishreen University, Syria, where he also worked as a teaching assistant. His research focuses on the explainability and transparency of AI-based decision-making tools, i.e., the systematic design of explainability solutions for collaborative human-AI decision-making environments.
Mohammad has a keen interest in user trust, which he regards as a crucial requirement for deploying AI-based solutions to real-world problems because of its dynamic nature (over-trust and under-trust). He focuses on the principles, methods and tools needed to engineer trust-aware technology that calibrates users' trust in such systems.
His PhD thesis, "A Design Method for Explainable AI Interfaces to Enhance Trust Calibration", guides designers and developers in enhancing trust calibration through XAI interface design.
Human-AI teaming and Explainable AI
- Naiseh, M., Al-Thani, D., Jiang, N. and Ali, R., 2023. How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human-Computer Studies, 169.
- Naiseh, M., Bentley, C. and Ramchurn, S.D., 2022. Trustworthy Autonomous Systems (TAS): Engaging TAS experts in curriculum design.
- Naiseh, M., Cemiloglu, D., Al Thani, D., Jiang, N. and Ali, R., 2021. Explainable Recommendations and Calibrated Trust: Two Systematic User Errors. Computer, 54 (10), 28-37.
- Naiseh, M., Al-Thani, D., Jiang, N. and Ali, R., 2021. Explainable recommendation: when design meets trust calibration. World Wide Web, 24 (5), 1857-1884.
- Lamb, S., Naiseh, M., Clark, J., Ramchurn, S. and Norman, T., 2023. Learning from Expert Teams. In: Contemporary Ergonomics & Human Factors 2022, 25-26 April 2022, Birmingham, UK.
- Naiseh, M., 2022. Industry Led Use-Case Development for Human-Swarm Operations. In: AAAI 2022 Spring Symposium Series, 21-23 March 2022, Stanford University.
- Cemiloglu, D., Naiseh, M., Catania, M., Oinas-Kukkonen, H. and Ali, R., 2021. The Fine Line Between Persuasion and Digital Addiction. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12684 LNCS, 289-307.
- Naiseh, M., Jiang, N., Ma, J. and Ali, R., 2020. Personalising explainable recommendations: Literature and conceptualisation. Advances in Intelligent Systems and Computing, 1160 AISC, 518-533.
- Aldhayan, M., Naiseh, M., McAlaney, J. and Ali, R., 2020. Online Peer Support Groups for Behavior Change: Moderation Requirements. Lecture Notes in Business Information Processing, 385 LNBIP, 157-173.
- Naiseh, M., Jiang, N., Ma, J. and Ali, R., 2020. Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks. Lecture Notes in Business Information Processing, 385 LNBIP, 212-228.
- Naiseh, M., 2020. Explainability Design Patterns in Clinical Decision Support Systems. Lecture Notes in Business Information Processing, 385 LNBIP, 613-620.
- Naiseh, M., 2021. Explainable recommendation: When design meets trust calibration – Research protocol. Bournemouth University.
- Naiseh, M., 2021. Explainable Recommendations and Calibrated Trust - Research Protocol. Bournemouth University.
- Naiseh, M., 2021. C-XAI: Design Method for Explainable AI Interfaces to Enhance Trust Calibration. PhD Thesis. Bournemouth University, Faculty of Science and Technology.
Teaching Profile (Undergraduate)
- Economics of Information Security
- Security Information and Event Management
- Ethical Hacking & Countermeasures
- Networks and Cyber Security
- PGR representative, SciTech Computing & Informatics Research
- Good Clinical Practice (GCP), 01 Nov 2019
- MSc in Computer Science (2017)