A specialized large language model (LLM) tailored for medical conversations has been developed to address the limitations of general-purpose models such as GPT-4. Continually trained from a 13B Llama 2 base, the model excels at automated scribing, surpassing GPT-4 on PubMedQA with 76.6% accuracy and matching its performance when summarizing medical dialogues into SOAP notes. Notably, it captures medical concepts with higher correctness and completeness than human scribes. The work stresses the need for domain-specific models in healthcare, where precision and understanding are critical: existing healthcare LLMs often excel at medical Q&A yet struggle to produce complete, EHR-compatible medical notes. By leveraging diverse datasets and continued-pretraining techniques, the model can efficiently generate physician-approved SOAP notes from doctor-patient conversations.
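As an illustration only (not the paper's actual pipeline), the minimal sketch below shows how a Llama-2-style chat model could be prompted to turn a doctor-patient dialogue into a SOAP note using the Hugging Face transformers API. The model ID, prompt template, and sample dialogue are assumptions for demonstration.

```python
# Hypothetical sketch: SOAP-note generation from a doctor-patient dialogue.
# The model ID, prompt wording, and dialogue are illustrative assumptions,
# not details taken from the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-13b-chat-hf"  # stand-in for the continued-pretrained medical model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

dialogue = (
    "Doctor: What brings you in today?\n"
    "Patient: I've had a dry cough and a mild fever for three days.\n"
    "Doctor: Any shortness of breath?\n"
    "Patient: No, just fatigue."
)

prompt = (
    "Summarize the following doctor-patient conversation into a SOAP note "
    "with Subjective, Objective, Assessment, and Plan sections.\n\n"
    f"{dialogue}\n\nSOAP note:\n"
)

# Tokenize the prompt, generate deterministically, and decode only the new tokens.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```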
Key takeaways from arxiv.org, by Dong Yuan et al., 03-15-2024: https://arxiv.org/pdf/2403.09057.pdf