Distilling the table-based reasoning abilities of Large Language Models (LLMs) into smaller models has proven effective for scientific table-to-text generation tasks.