Core Concepts
LLaMoCo introduces a novel instruction-tuning framework to adapt large language models for expert-level optimization code generation, outperforming existing approaches.
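To illustrate the general idea, below is a minimal sketch of instruction tuning a code-generation model on an instruction/solution pair describing an optimization problem. It is an assumption-laden illustration, not the paper's actual pipeline: the prompt template, the example data, and the use of the Hugging Face model "Salesforce/codegen-350M-mono" are all hypothetical choices made here for concreteness.

```python
# Minimal sketch of instruction tuning for optimization code generation.
# Assumes torch + transformers; dataset fields and prompt format are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Salesforce/codegen-350M-mono"  # small base model, as in the statistics below
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Hypothetical instruction/code pair: a problem description and the
# expert-level solver code the model should learn to emit.
examples = [
    {
        "instruction": "Minimize f(x) = (x - 3)**2 over x in [-10, 10].",
        "code": (
            "from scipy.optimize import minimize_scalar\n"
            "res = minimize_scalar(lambda x: (x - 3)**2, bounds=(-10, 10), method='bounded')\n"
            "print(res.x)\n"
        ),
    }
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for ex in examples:
    # Concatenate prompt and target; loss is taken over the whole sequence
    # for brevity (a real setup would typically mask the prompt tokens).
    text = f"### Problem:\n{ex['instruction']}\n### Solution:\n{ex['code']}"
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```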
Statistics
Experimental results show that the CodeGen (350M) model achieves markedly improved performance when fine-tuned with LLaMoCo.
Quotation
"LLaMoCo introduces a novel instruction-tuning framework to adapt large language models for expert-level optimization code generation."