Key Concepts
Creating a dataset for multi-jurisdictional common law court judgment summarization and evaluating the performance of various models.
Summary
The article presents CLSum, the first dataset for summarizing multi-jurisdictional common law court judgment documents, and explores the use of large language models for data augmentation, summary generation, and evaluation. It highlights the challenges of limited data and computing resources in court judgment summarization and proposes solutions such as knowledge-constrained rephrasing and efficient training methods. The experimental results report the performance of different summarization models in zero-shot and few-shot settings, along with automatic and human evaluation results.
Statistics
Multiple metrics are used to assess the quality of the generated summaries and evaluate the performance of the current models (see the evaluation sketch below).
LLMs (GPT-3.5-turbo and Vicuna) are competitive in the zero-shot setting on all CLSum datasets (see the zero-shot prompting sketch below).
The zero-shot performance of several pre-trained sequence-to-sequence models is worse than that of unsupervised extractive methods (LexRank and TextRank); a sketch of such extractive baselines follows below.
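The specific automatic metrics are not listed in this summary; ROUGE is a common choice for summarization benchmarks. As a minimal, hypothetical sketch (assuming the rouge-score Python package), summary quality against a reference could be scored like this:

```python
# pip install rouge-score
from rouge_score import rouge_scorer

reference = "The court dismissed the appeal and upheld the original sentence."
candidate = "The appeal was dismissed and the sentence was upheld by the court."

# ROUGE-1/2 measure unigram/bigram overlap; ROUGE-L uses the longest common subsequence.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)  # argument order: (target, prediction)

for name, score in scores.items():
    print(f"{name}: P={score.precision:.3f} R={score.recall:.3f} F1={score.fmeasure:.3f}")
```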
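To illustrate what zero-shot summarization with an LLM looks like in practice, here is a minimal sketch assuming the OpenAI Python client; the prompt wording and decoding settings are illustrative, not those used in the paper:

```python
# pip install openai  (requires OPENAI_API_KEY in the environment)
from openai import OpenAI

client = OpenAI()

def zero_shot_summarize(judgment_text: str, model: str = "gpt-3.5-turbo") -> str:
    """Request a summary from instructions alone, i.e. without in-context examples."""
    response = client.chat.completions.create(
        model=model,
        temperature=0.0,
        messages=[
            {"role": "system", "content": "You summarize common law court judgments concisely."},
            {"role": "user", "content": f"Summarize the following judgment:\n\n{judgment_text}"},
        ],
    )
    return response.choices[0].message.content

# Example usage:
# print(zero_shot_summarize(open("judgment.txt").read()))
```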
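LexRank and TextRank are unsupervised extractive methods that rank sentences on a similarity graph and return the top-ranked ones verbatim. A minimal sketch using the sumy package (with placeholder text, not the CLSum data) could look like this:

```python
# pip install sumy nltk && python -m nltk.downloader punkt
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lex_rank import LexRankSummarizer
from sumy.summarizers.text_rank import TextRankSummarizer

# Placeholder judgment text; in practice this would be a full court judgment document.
document_text = (
    "The appellant challenged the trial judge's interpretation of the contract. "
    "The court found that the contractual terms were unambiguous. "
    "Accordingly, the appeal was dismissed with costs awarded to the respondent."
)

parser = PlaintextParser.from_string(document_text, Tokenizer("english"))

for name, summarizer_cls in [("LexRank", LexRankSummarizer), ("TextRank", TextRankSummarizer)]:
    summarizer = summarizer_cls()
    # Select the two highest-ranked sentences as the extractive summary.
    sentences = summarizer(parser.document, sentences_count=2)
    print(name + ":", " ".join(str(s) for s in sentences))
```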
Quotes
"Generating high-quality summaries of court judgment documents can facilitate legal practitioners to efficiently review previous cases and assist the general public in accessing how the courts operate and how the law is applied."
"Our experimental results verify that the LLM-based summarization methods can perform well in the few-shot and zero-shot settings."