Key Concepts
DoRA enhances fine-tuning by decomposing weights into magnitude and direction components, outperforming LoRA.
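The decomposition can be sketched in a few lines of NumPy: a pretrained weight matrix is split into a per-column magnitude vector and a unit-norm direction matrix, which DoRA then adapts separately (the direction via a LoRA-style low-rank update). This is a minimal illustration with assumed shapes and random data, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # pretrained weight (d_out x d_in), assumed shape

# Magnitude: column-wise L2 norms; Direction: columns scaled to unit norm.
m = np.linalg.norm(W, axis=0)
V = W / m

# Reconstruction: W = m * V / ||V||_c (column-wise norm of V is 1 here).
W_reconstructed = m * (V / np.linalg.norm(V, axis=0))
assert np.allclose(W, W_reconstructed)
```

During fine-tuning, DoRA keeps this factorization and learns updates to `m` and to the direction, rather than updating `W` as a single tensor as LoRA effectively does.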
Statistics
LoRA achieved an average accuracy of 74.7%.
DoRA improved the average accuracy to 78.1%.
Quotes
"Weight decomposition reveals distinct learning patterns between LoRA and FT."
"DoRA consistently outperforms LoRA across various fine-tuning tasks."