nclu_team at SemEval-2023 Task 6: Attention-based Approaches for Large Court Judgement Prediction with Explanation
Authors: Rusnachenko, N., Markchom, T. and Liang, H.
Conference: Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Association for Computational Linguistics (ACL)
Dates: 13-14 July 2023
Pages: 270-274
DOI: 10.18653/v1/2023.semeval-1.36
Abstract: Legal documents tend to be large. In this paper, we experiment with attention-based approaches, complemented by several document processing techniques, for judgment prediction. We treat explanation prediction as an extractive text summarization problem based on the output of (1) a CNN with an attention mechanism and (2) the self-attention of language models. Our extensive experiments show that processing document endings first yields a 2.1% improvement in judgment prediction across all models. Additionally, removing non-informative sentences improves explanation prediction performance by 4% for attention-based CNN models. Our best submissions ranked 8th and 3rd among 11 participating teams on the judgment prediction (C1) and prediction with explanation (C2) tasks, respectively. The results of our experiments are published.
Source: Scopus; Manual
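
The abstract's two key ideas, keeping document endings (verdicts in long judgments tend to appear near the end) and using language-model self-attention to rank sentences for an extractive explanation, can be illustrated with a minimal Python sketch. This is not the authors' code: it assumes a generic Hugging Face BERT checkpoint, and the helper names (keep_ending, score_sentences) are hypothetical.

```python
# Minimal sketch (not the authors' implementation) of two ideas from the
# abstract: keep the *ending* of a long judgment, then rank its sentences
# by the self-attention mass the [CLS] token assigns to them.
# Assumes a generic Hugging Face BERT checkpoint; helper names are made up.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

def keep_ending(text: str, max_tokens: int = 512) -> str:
    """Truncate from the front so the model sees the document ending."""
    ids = tokenizer.encode(text, add_special_tokens=False)
    return tokenizer.decode(ids[-(max_tokens - 2):])  # room for [CLS]/[SEP]

def score_sentences(sentences: list[str], max_tokens: int = 512) -> list[float]:
    """Score each sentence by the [CLS] self-attention mass over its tokens
    (averaged across layers and heads), a simple extractive-summary signal."""
    text = " ".join(sentences)
    enc = tokenizer(text, return_tensors="pt", truncation=True,
                    max_length=max_tokens, return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0].tolist()
    with torch.no_grad():
        out = model(**enc)
    # out.attentions: tuple of (batch, heads, seq, seq) tensors; average over
    # layers and heads, then take the attention row of [CLS] (position 0).
    att = torch.stack(out.attentions).mean(dim=(0, 2))[0, 0].tolist()
    # Character spans of each sentence inside the joined text.
    bounds, pos = [], 0
    for s in sentences:
        bounds.append((pos, pos + len(s)))
        pos += len(s) + 1  # account for the joining space
    scores = [0.0] * len(sentences)
    for (start, end), a in zip(offsets, att):
        if start == end:  # special tokens ([CLS], [SEP]) map to (0, 0)
            continue
        for i, (b0, b1) in enumerate(bounds):
            if start >= b0 and end <= b1:
                scores[i] += a
                break
    return scores

sentences = ["The appellant was charged under Section 302.",
             "Witness testimony was found unreliable.",
             "The appeal is allowed and the conviction set aside."]
print(score_sentences(sentences))
```

Top-scoring sentences would then be selected as the explanation; the systems described in the paper additionally use an attention-based CNN for the same purpose.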