Surgical Instruction Generation with Transformers
Authors: Zhang, J., Nie, Y., Chang, J. and Zhang, J.J.
Conference: MICCAI 2021: International Conference on Medical Image Computing and Computer-Assisted Intervention
Journal: Lecture Notes in Computer Science (Medical Image Computing and Computer Assisted Intervention - MICCAI 2021, Part IV)
Volume: 12904
Pages: 290-299
eISSN: 1611-3349
ISSN: 0302-9743
ISBN: 978-3-030-87201-4
DOI: 10.1007/978-3-030-87202-1_28
Abstract: Automatic surgical instruction generation is a prerequisite for intra-operative, context-aware surgical assistance. Generating instructions from surgical scenes is challenging, however, as it requires jointly understanding the surgical activity in the current view and modelling the relationships between visual information and textual description. Inspired by neural machine translation and image captioning in the open domain, we introduce a transformer-backboned encoder-decoder network with self-critical reinforcement learning to generate instructions from surgical images. We evaluate the effectiveness of our method on the DAISI dataset, which includes 290 procedures from various medical disciplines. Our approach outperforms the existing baseline on all caption evaluation metrics. The results demonstrate the benefits of the transformer-backboned encoder-decoder structure in handling multimodal context.
https://eprints.bournemouth.ac.uk/36118/
Sources: Scopus; Web of Science (Lite); BURO EPrints
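The abstract describes two techniques: a transformer encoder-decoder that maps visual features to instruction tokens, and self-critical reinforcement learning (SCST), where the reward of a sampled caption is baselined against the reward of the greedy-decoded caption. The sketch below illustrates both ideas under stated assumptions: it uses PyTorch's generic nn.Transformer, random placeholder image features, and a toy unigram-overlap reward standing in for a caption metric such as CIDEr. All names, sizes, and the reward are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a transformer captioner with self-critical RL (SCST).
# Assumptions: precomputed 2048-d image region features, toy reward, tiny model.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D_MODEL, MAX_LEN, BOS = 1000, 256, 20, 1

class CaptionTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.feat_proj = nn.Linear(2048, D_MODEL)   # project image region features
        self.embed = nn.Embedding(VOCAB, D_MODEL)   # token embeddings
        self.pos = nn.Parameter(torch.zeros(MAX_LEN, D_MODEL))  # learned positions
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, feats, tokens):
        # feats: (B, N, 2048) region features; tokens: (B, T) token ids
        T = tokens.size(1)
        tgt = self.embed(tokens) + self.pos[:T]
        mask = self.transformer.generate_square_subsequent_mask(T).to(tokens.device)
        out = self.transformer(self.feat_proj(feats), tgt, tgt_mask=mask)
        return self.head(out)                        # (B, T, VOCAB) logits

def decode(model, feats, sample):
    """Autoregressive decoding; sample=True draws tokens, else greedy argmax."""
    tokens = torch.full((feats.size(0), 1), BOS, dtype=torch.long)
    logps = []
    for _ in range(MAX_LEN - 1):
        logp = F.log_softmax(model(feats, tokens)[:, -1], dim=-1)
        nxt = (torch.multinomial(logp.exp(), 1) if sample
               else logp.argmax(-1, keepdim=True))
        logps.append(logp.gather(1, nxt))            # log-prob of chosen token
        tokens = torch.cat([tokens, nxt], dim=1)
    return tokens, torch.cat(logps, dim=1)           # (B, T), (B, T-1)

def toy_reward(seqs, refs):
    # Placeholder reward: unigram overlap with the reference. The paper's
    # setting would use a sentence-level caption metric instead.
    r = []
    for s, ref in zip(seqs.tolist(), refs.tolist()):
        ref_set = set(ref)
        r.append(sum(t in ref_set for t in s) / len(s))
    return torch.tensor(r)

def scst_loss(model, feats, refs):
    """Self-critical loss: advantage = sampled reward - greedy baseline reward."""
    model.eval()
    with torch.no_grad():
        greedy, _ = decode(model, feats, sample=False)
    model.train()
    sampled, logps = decode(model, feats, sample=True)
    advantage = toy_reward(sampled, refs) - toy_reward(greedy, refs)
    return -(advantage.unsqueeze(1) * logps).mean()

# Smoke test on random data.
model = CaptionTransformer()
feats = torch.randn(2, 10, 2048)                      # 2 images, 10 regions each
refs = torch.randint(3, VOCAB, (2, MAX_LEN))          # fake reference captions
loss = scst_loss(model, feats, refs)
loss.backward()
print(loss.item())

The key line is the advantage term: sampled sequences that beat the model's own greedy decode are reinforced, which lets training optimize a non-differentiable sequence-level metric directly instead of token-level cross-entropy. Details such as EOS handling and proper batching are omitted for brevity.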