The paper explores the use of large language models (LLMs) for generating concise summaries from mental state examinations (MSEs). The authors first developed a 12-item MSE questionnaire and collected data from 300 participants. They then evaluated the performance of four pre-trained LLMs (BART-base, BART-large-CNN, T5-large, and BART-large-xsum-samsum) with and without fine-tuning on the collected dataset.
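Fine-tuning of this kind is commonly done with the Hugging Face transformers library. The paper's own training code is not reproduced here, so the sketch below is illustrative only: the dialogue/summary data schema, all hyperparameters, and the public facebook/bart-large-cnn checkpoint id are assumptions standing in for the authors' actual setup.

```python
# A minimal fine-tuning sketch with Hugging Face transformers. The paper's actual
# training script, hyperparameters, and data schema are not given here, so the
# "dialogue"/"summary" fields and all settings below are illustrative assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "facebook/bart-large-cnn"  # one plausible checkpoint id for BART-large-CNN
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical MSE records: each pairs questionnaire responses with a reference summary.
train_data = Dataset.from_dict({
    "dialogue": ["Q: How has your mood been? A: Low for several weeks."],
    "summary": ["Patient reports persistently low mood over several weeks."],
})

def preprocess(batch):
    # Tokenize inputs and reference summaries; text_target routes the labels correctly.
    model_inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = train_data.map(preprocess, batched=True, remove_columns=["dialogue", "summary"])

args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-mse",
    per_device_train_batch_size=4,  # small batches suit a small (300-participant) dataset
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```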
The results show that fine-tuning the LLMs, even with limited training data, significantly improves the quality of the generated summaries. The best-performing fine-tuned model, BART-large-CNN, achieved ROUGE-1 and ROUGE-L scores of 0.810 and 0.764, respectively, outperforming the pre-trained models and existing work on medical dialogue summarization.
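For readers unfamiliar with these metrics, ROUGE-1 and ROUGE-L measure unigram and longest-common-subsequence overlap between generated and reference summaries. The snippet below shows how such scores are typically computed with the `evaluate` library; the authors' exact evaluation tooling is not specified, so treat this as a sketch with invented example strings.

```python
# Computing ROUGE-1/ROUGE-L with the `evaluate` library (requires the
# rouge_score package). The strings below are made up for illustration.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["Patient reports low mood and poor sleep."]  # hypothetical model output
references = ["The patient reports persistently low mood and poor sleep."]

scores = rouge.compute(predictions=predictions, references=references)
# F-measures in [0, 1], directly comparable to the 0.810 / 0.764 figures above.
print(f"ROUGE-1: {scores['rouge1']:.3f}, ROUGE-L: {scores['rougeL']:.3f}")
```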
The authors also assessed the generalizability of the BART-large-CNN model by evaluating it on a publicly available dataset, with promising results. The study highlights the potential of leveraging LLMs to develop scalable, automated systems for conducting initial mental health assessments and generating summaries, which could help alleviate the burden on mental health professionals, especially in regions with limited access to such services.
Key insights from the paper by Manjeet Yada... at arxiv.org, 04-01-2024.
https://arxiv.org/pdf/2403.20145.pdf