Distilling the text-summarization abilities of large language models (LLMs) into a compact, locally run model can enhance performance and interpretability.
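A common way to do this is sequence-level distillation: the teacher LLM writes summaries offline, and a small student model is fine-tuned on the resulting (document, summary) pairs. The sketch below illustrates this under stated assumptions — the `t5-small` checkpoint, the toy dataset, and the hyperparameters are all placeholders, not the method or data from the original text.

```python
# Minimal sketch of sequence-level distillation for summarization.
# Assumes teacher summaries were already generated offline by an LLM;
# checkpoint name, data, and hyperparameters are illustrative.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

STUDENT = "t5-small"  # compact student; any seq2seq checkpoint works

# Hypothetical teacher output: (source document, LLM-written summary) pairs.
pairs = [
    ("Long article text ...", "Short teacher-written summary ..."),
    # ... more pairs produced by the teacher LLM
]

tokenizer = AutoTokenizer.from_pretrained(STUDENT)
student = AutoModelForSeq2SeqLM.from_pretrained(STUDENT)
optimizer = torch.optim.AdamW(student.parameters(), lr=3e-5)

def collate(batch):
    docs, sums = zip(*batch)
    # T5-style task prefix on the inputs; truncate long documents.
    enc = tokenizer(["summarize: " + d for d in docs], truncation=True,
                    max_length=512, padding=True, return_tensors="pt")
    labels = tokenizer(list(sums), truncation=True, max_length=128,
                       padding=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask pad in the loss
    enc["labels"] = labels
    return enc

loader = DataLoader(pairs, batch_size=8, shuffle=True, collate_fn=collate)

student.train()
for epoch in range(3):
    for batch in loader:
        # Standard cross-entropy against the teacher's summaries.
        loss = student(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Training on the teacher's outputs rather than its logits keeps the recipe simple and lets the student run entirely on local hardware; richer variants (e.g., logit-matching distillation) would require access to the teacher's token distributions.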