Key Concepts
The importance of soft-prompt tuning for accurately and efficiently identifying the biases of large language models.
Statistics
Pre-trained LLMs are capable of performing downstream tasks without requiring large quantities of labelled data (Liu et al. 2023).
Soft-prompt tuning can be used to quantify the biases of LLMs such as OPT (Zhang et al. 2022) and LLaMA (Touvron et al. 2023); a minimal sketch of the technique follows this list.
LLMs exhibit measurable biases across protected groups within the sensitive attributes of age, sexuality, and disability.
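As context for the soft-prompt tuning claim above, here is a minimal sketch of the technique: a small block of trainable prompt embeddings is prepended to the input embeddings of a frozen causal LM, and only those embeddings are updated during training. This assumes the Hugging Face transformers API; the model name (facebook/opt-125m), prompt length, and class name are illustrative choices, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM


class SoftPromptLM(nn.Module):
    """A frozen causal LM with a small set of trainable soft-prompt embeddings."""

    def __init__(self, model_name="facebook/opt-125m", n_prompt_tokens=20):
        super().__init__()
        self.lm = AutoModelForCausalLM.from_pretrained(model_name)
        for p in self.lm.parameters():
            p.requires_grad = False  # base model stays frozen; only the prompt is tuned
        emb_dim = self.lm.get_input_embeddings().embedding_dim
        # Trainable soft prompt, randomly initialised (illustrative initialisation)
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt_tokens, emb_dim) * 0.02)

    def forward(self, input_ids, attention_mask, labels=None):
        batch = input_ids.size(0)
        tok_emb = self.lm.get_input_embeddings()(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the soft prompt to the token embeddings
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(batch, prompt.size(1), device=attention_mask.device,
                                 dtype=attention_mask.dtype)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        if labels is not None:
            # Do not compute loss over the soft-prompt positions
            ignore = torch.full((batch, prompt.size(1)), -100,
                                device=labels.device, dtype=labels.dtype)
            labels = torch.cat([ignore, labels], dim=1)
        return self.lm(inputs_embeds=inputs_embeds,
                       attention_mask=attention_mask, labels=labels)
```

In this setup only `soft_prompt` receives gradients, so an optimizer is built over that single parameter; the tuned prompt's behaviour on templated inputs mentioning different protected groups can then be compared to probe model bias.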
Quotes
"Prompting large language models (LLMs) has gained substantial popularity as pre-trained LLMs are capable of performing downstream tasks without requiring large quantities of labelled data." - Liu et al.
"It is crucial to accurately and efficiently identify biases exhibited by these models and their practical implications." - Content