The study explores the use of large language models (LLMs) to simplify complex biomedical literature and thereby improve public health literacy. Several models, including T5, SciFive, BART, GPT-3.5, GPT-4, and BioGPT, were fine-tuned and evaluated with automatic metrics such as BLEU, ROUGE, SARI, and BERTScore. In human evaluations, BART-Large with a Control Token mechanism scored well on simplicity but lagged behind T5-Base in meaning preservation. The research underscores the role of text simplification in promoting health literacy and outlines directions for future work on this task.
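For readers unfamiliar with the automatic metrics named above, the snippet below is a minimal sketch of how a simplification output could be scored with SARI and BERTScore using the Hugging Face `evaluate` library. The library choice and the example sentences are assumptions for illustration; the paper does not specify its evaluation tooling or provide these texts.

```python
# Illustrative sketch only: scoring one simplification with SARI and BERTScore
# via the Hugging Face `evaluate` library (an assumed toolchain, not necessarily
# the one used in the paper). The sentences are made-up examples.
import evaluate

sources = ["Myocardial infarction is caused by occlusion of a coronary artery."]
predictions = ["A heart attack happens when a blood vessel to the heart gets blocked."]
references = [[
    "A heart attack occurs when an artery that feeds the heart is blocked."
]]

# SARI compares the system output against both the source and the references,
# rewarding useful additions, deletions, and kept words.
sari = evaluate.load("sari")
sari_score = sari.compute(sources=sources, predictions=predictions, references=references)

# BERTScore measures semantic similarity between prediction and reference,
# often read as a rough proxy for meaning preservation.
bertscore = evaluate.load("bertscore")
bert_result = bertscore.compute(predictions=predictions,
                                references=[r[0] for r in references],
                                lang="en")

print(f"SARI: {sari_score['sari']:.2f}")
print(f"BERTScore F1: {bert_result['f1'][0]:.3f}")
```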
Key ideas extracted from the source content at arxiv.org, by Zihao Li, Sam..., 03-19-2024: https://arxiv.org/pdf/2309.13202.pdf