Core Concepts
The study investigates state-of-the-art large language models for improving the readability of biomedical abstracts through text simplification.
Abstract
The study explores the use of large language models (LLMs) for simplifying complex biomedical literature to improve public health literacy. Several models, including T5, SciFive, BART, GPT-3.5, GPT-4, and BioGPT, were fine-tuned and evaluated with metrics such as BLEU, ROUGE, SARI, and BERTScore. BART-Large with Control Token mechanisms achieved strong simplicity ratings in human evaluations but lagged behind T5-Base in meaning preservation. The research underscores the role of text simplification in promoting health literacy and outlines future directions for the task.
Stats
BART-L-w-CTs achieved a SARI score of 46.54.
T5-Base reported the highest BERTScore of 72.62.
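SARI, the headline metric above, rewards a simplification system for keeping words the references keep, adding words the references add, and deleting words the references delete. The sketch below is a simplified, unigram-only illustration of that idea (the actual metric averages n-grams up to length 4 and weights multiple references differently); the function name and example sentences are ours, not from the study.

```python
from collections import Counter

def unigram_sari(source: str, output: str, references: list[str]) -> float:
    """Simplified SARI-style score: unigrams only.

    Averages keep-F1, add-F1, and delete-precision, scaled to 0-100.
    A rough sketch of the metric, not the official implementation.
    """
    src = Counter(source.split())
    out = Counter(output.split())
    ref = Counter()
    for r in references:
        ref |= Counter(r.split())  # union of reference unigram counts

    def f1(p: float, r: float) -> float:
        return 2 * p * r / (p + r) if p + r else 0.0

    def ratio(good: Counter, total: Counter) -> float:
        return sum(good.values()) / sum(total.values()) if total else 0.0

    # KEEP: source words the system retained that the references also retain
    keep_sys, keep_ref = src & out, src & ref
    keep = f1(ratio(keep_sys & keep_ref, keep_sys),
              ratio(keep_sys & keep_ref, keep_ref))

    # ADD: new words the system introduced that the references also introduce
    add_sys, add_ref = out - src, ref - src
    add = f1(ratio(add_sys & add_ref, add_sys),
             ratio(add_sys & add_ref, add_ref))

    # DELETE: source words the system dropped that the references also drop
    # (SARI scores deletions with precision only)
    del_sys, del_ref = src - out, src - ref
    delete = ratio(del_sys & del_ref, del_sys)

    return 100 * (keep + add + delete) / 3
```

For example, simplifying "the cat sat on the mat" to "the cat sat" against the single reference "the cat sat" earns full keep and delete credit but no add credit, since neither the system nor the reference introduces new words.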
Quotes
"Applying Natural Language Processing (NLP) models allows for quick accessibility to lay readers."
"BART-Large with Control Token mechanisms reported high simplicity scores."