Enabling trustworthiness in human-swarm systems through a digital twin
Authors: Soorati, M.D., Naiseh, M., Hunt, W., Parnell, K., Clark, J. and Ramchurn, S.D.
Pages: 93-125
DOI: 10.1016/B978-0-443-15988-6.00008-X
Abstract: Robot swarms are highly dynamic systems that exhibit fault-tolerant behavior in accomplishing given tasks. Applications of swarm robotics remain very limited due to the swarm's lack of complex decision-making capability; real-world applications are only possible with human supervision to monitor and control the behavior of the swarm. Ensuring that human operators can trust the swarm system is one of the key challenges in human-swarm systems. This chapter presents a digital twin for trustworthy human-swarm teaming. The first element in designing such a simulation platform is understanding the trust requirements a human-swarm system must satisfy to be labeled trustworthy. To outline the key trust requirements, we interviewed a group of experienced uncrewed aerial vehicle (UAV) operators and collated their suggestions for building and repairing trust in single- and multiple-UAV systems. We then conducted a survey to gather swarm experts' views on creating a taxonomy for explainability in human-swarm systems. This chapter presents a digital twin platform that implements a disaster-management use case and has the capacity to meet the extracted trust and explainability requirements.
Source: Scopus