Core Concepts
Language models can understand human values through fine-tuning and prompting techniques.
Summary
Accurately detecting the human values behind written arguments is crucial for understanding the tendencies they express. The article explores fine-tuning and prompt tuning on the Human Value Detection 2023 dataset. Because existing datasets have limitations, the Touché23-ValueEval dataset was proposed, offering a diverse collection of arguments. Most participating teams treated the task as a classification problem, yet top performance remains at an average F1 score of 0.56. The project focuses on comparing prompt tuning with fine-tuning and evaluating the capabilities of PLMs aligned via RLHF.
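Both training regimes operate on the same multi-label setup: each argument can express several of the dataset's 20 value categories at once. Below is a minimal fine-tuning sketch, assuming HuggingFace Transformers, a RoBERTa-Large backbone, and a toy batch; the hyperparameters and labels are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: fine-tuning a pretrained encoder for multi-label
# human value detection. Model name and the toy example are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_VALUES = 20  # Touché23-ValueEval defines 20 value categories

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large",
    num_labels=NUM_VALUES,
    problem_type="multi_label_classification",  # uses BCE-with-logits loss
)

# One training step on a toy batch: an argument text and its value labels.
texts = ["We should subsidize public transport to cut emissions."]
labels = torch.zeros(1, NUM_VALUES)
labels[0, 3] = 1.0  # mark one hypothetical gold value category as present

batch = tokenizer(texts, return_tensors="pt", truncation=True, padding=True)
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # fine-tuning updates all encoder weights
```

The per-category binary setup here is also what the label-wise F1 scores in the next section are computed over.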
Statistics
The best-performing approaches achieved an average F1 score of 0.56.
RoBERTa-Large w/ MH achieved a macro F1 score of 0.522 after contrastive-learning fine-tuning.
T5-OA achieved an F1 score of 0.55 on Benevolence: caring.
GPT2†-BCA achieved an F1 score of 0.42 on Stimulation.
Llama-7B achieved an F1 score of 0.71 on Universalism: tolerance.
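The per-model numbers above are label-wise F1 scores; the headline figure is their macro average, which weights every value category equally regardless of frequency. A minimal sketch of that computation with scikit-learn, using toy arrays rather than results from the paper:

```python
# Macro F1 over multi-label predictions: per-category F1, averaged uniformly.
# The arrays below are toy data with 3 categories, not the paper's results.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])  # gold value labels
y_pred = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 0]])  # model predictions

per_label = f1_score(y_true, y_pred, average=None)   # one F1 per category
macro = f1_score(y_true, y_pred, average="macro")    # uniform mean over categories
print(per_label, macro)
```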
Quotes
"Prompt tuning leverages knowledge acquired during pre-training, showing excellent performance on few-shot tasks."
"Fine-tuning has been widely explored post BERT release, achieving impressive results but struggling in few-shot scenarios."
"LLMs demonstrate remarkable reasoning capabilities, particularly in systematic prompting processes."