Key Concepts
Legion enhances Pre-trained Language Models (PTMs) for more accurate GitHub topic recommendation by mitigating the long-tailed distribution of topic labels.
Summary
To improve GitHub topic recommendation in open-source development, Legion combines Pre-trained Language Models with a Distribution-Balanced Loss. It improves the performance of PTMs across the board, with especially notable gains on mid-frequency labels, and outperforms prior approaches.
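The summary names Distribution-Balanced Loss as the ingredient that counters the long-tailed label distribution. The sketch below shows the general shape of that loss for multi-label topic classification, following Wu et al.'s formulation (re-balanced weighting plus negative-tolerant regularization), in PyTorch. It is a minimal sketch under assumptions: the hyperparameter defaults (alpha, beta, mu, lambda_, kappa) and the class-frequency inputs are illustrative, not Legion's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistributionBalancedLoss(nn.Module):
    """Sketch of Distribution-Balanced Loss for long-tailed multi-label
    classification (e.g., GitHub topics). Hyperparameters are illustrative
    defaults, not the values used by Legion."""

    def __init__(self, class_freq, num_samples, alpha=0.1, beta=10.0,
                 mu=0.3, lambda_=5.0, kappa=0.05):
        super().__init__()
        # class_freq[k] = number of training examples carrying label k;
        # num_samples = total number of training examples.
        freq = torch.as_tensor(class_freq, dtype=torch.float)
        # Inverse label frequency drives the re-balanced weighting.
        self.register_buffer("inv_freq", 1.0 / freq.clamp(min=1.0))
        # Class-specific margin nu_k derived from the class prior
        # p_k = n_k / N, so rare labels are penalized less on their
        # (many) negative examples.
        prior = (freq / float(num_samples)).clamp(1e-6, 1.0 - 1e-6)
        self.register_buffer("nu", kappa * torch.log(1.0 / prior - 1.0))
        self.alpha, self.beta, self.mu, self.lambda_ = alpha, beta, mu, lambda_

    def forward(self, logits, targets):
        # Re-balanced weighting: ratio of class-level to instance-level
        # sampling probability, smoothed into the range (alpha, alpha + 1).
        pc = self.inv_freq.unsqueeze(0)                      # (1, C)
        pi = (targets * self.inv_freq).sum(1, keepdim=True)  # (B, 1)
        r = pc / pi.clamp(min=1e-12)
        w = self.alpha + torch.sigmoid(self.beta * (r - self.mu))

        # Negative-tolerant binary cross-entropy on margin-shifted logits:
        # positives use the usual -log(sigmoid(z)); negatives are scaled
        # by lambda_ so over-represented negatives dominate less.
        z = logits - self.nu
        pos = F.softplus(-z)
        neg = F.softplus(self.lambda_ * z) / self.lambda_
        return (w * (targets * pos + (1.0 - targets) * neg)).mean()
```

A hypothetical usage on a toy long-tailed label set:

```python
# Four topics whose training frequencies fall off sharply (assumed numbers).
loss_fn = DistributionBalancedLoss(class_freq=[900, 120, 30, 5], num_samples=1000)
logits = torch.randn(8, 4)                      # PTM classification head output
targets = (torch.rand(8, 4) < 0.3).float()      # multi-hot topic labels
print(loss_fn(logits, targets))
```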
Statistics
F1 scores by topic-frequency bucket (a sketch of how such bucketed scores can be computed follows this list):
Head (frequent topics): BERT 0.409, BART 0.416, RoBERTa 0.366, ELECTRA 0.358
Mid: BERT 0.081, BART 0.049, RoBERTa 0.0, ELECTRA 0.0
Tail (rare topics): BERT 0.0, BART 0.0, RoBERTa 0.0, ELECTRA 0.0
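The Head/Mid/Tail split groups topic labels by how often they occur in the training data. A minimal sketch of computing per-bucket F1 with scikit-learn is below; the frequency thresholds head_min and tail_max are assumed for illustration and are not the paper's cut-offs.

```python
import numpy as np
from sklearn.metrics import f1_score

def bucketed_f1(y_true, y_pred, train_freq, head_min=100, tail_max=10):
    """Macro F1 per label-frequency bucket for multi-hot label matrices.

    y_true, y_pred: (n_samples, n_labels) binary indicator arrays.
    train_freq[k]:  training-set frequency of label k.
    Thresholds are illustrative, not the Legion paper's cut-offs.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    train_freq = np.asarray(train_freq)
    buckets = {
        "head": train_freq >= head_min,
        "mid": (train_freq > tail_max) & (train_freq < head_min),
        "tail": train_freq <= tail_max,
    }
    return {
        name: f1_score(y_true[:, mask], y_pred[:, mask],
                       average="macro", zero_division=0)
        for name, mask in buckets.items() if mask.any()
    }
```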
Quotes
"Legion can significantly improve the performance of all PTMs by up to 26% in terms of average F1 score."
"Legion showcases its ability by aiding PTMs in achieving an F1 score of approximately 0.4 for mid-frequency labels."
"Legion outperforms both state-of-the-art baselines with an increase in the average F1 score of up to 16.4%."