Key Concepts
The CodePLAN framework enhances the code generation performance of smaller models by distilling the reasoning ability of large language models (LLMs) into them.
Statistics
"Our approach improves the smaller model’s code generation performance (measured in pass@1 metric) by over 130% on the challenging APPS benchmark."
Quotations
"CodePLAN utilizes multi-task learning to imbue smaller models with LLMs’ reasoning capabilities."
"Our experiments show that in comparison to the conventional fine-tuning approach, our approach improves the smaller model’s code generation performance by over 130% on the challenging APPS benchmark."
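The multi-task learning mentioned in the quotations typically means the smaller model is trained on two objectives at once: generating the solution code and reproducing the LLM's reasoning (plan). A minimal sketch of such a combined objective, assuming a simple weighted sum with a hypothetical `plan_weight` hyperparameter (the source does not specify CodePLAN's exact loss formulation):

```python
def multitask_loss(code_loss: float, plan_loss: float, plan_weight: float = 0.5) -> float:
    """Combine the primary code-generation loss with an auxiliary
    plan-generation loss, as in multi-task distillation.

    code_loss:   cross-entropy of the model on the target code
    plan_loss:   cross-entropy of the model on the LLM-distilled plan
    plan_weight: assumed hyperparameter trading off the two tasks
    """
    return code_loss + plan_weight * plan_loss


# Example: code loss 2.0, plan loss 1.0, default weight 0.5
total = multitask_loss(2.0, 1.0)  # 2.5
```

Setting `plan_weight=0` recovers conventional fine-tuning on code alone, which is the baseline the quoted 130% pass@1 improvement is measured against.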