
People's Perceptions Toward Bias and Related Concepts in Large Language Models: A Systematic Review


Core Concepts
Large language models (LLMs) have brought breakthroughs across a wide range of tasks, but they also exhibit biases that researchers are actively working to evaluate. Understanding how people perceive LLMs and their biases is crucial for guiding future development.
Abstract
This systematic review explores how people perceive bias and related concepts in large language models (LLMs). It analyzes the advantages, biases, and conflicting perceptions of LLMs across a range of applications, and examines the factors that shape perceptions of, and concerns about, LLM applications. The review highlights that while LLMs offer time savings and can enhance cross-cultural communication, they also suffer from distribution bias that leads to generic responses. Participants showed varying levels of awareness of the biases inherent in LLM outputs, and conflicting views emerged on the coherence, impacts, appropriateness, efficiency, effectiveness, explainability, and anthropomorphism of LLMs. Factors such as task dependencies, domain limitations, personal backgrounds, contextual needs, and expectations influence how individuals judge LLM performance. The study underscores the importance of understanding these diverse perspectives in order to improve user experience and mitigate the risks associated with biased outputs.
Stats
"For this paper, we will use the term ‘bias’ to refer to any systematic favoring of certain artifacts or behavior over others that are equally valid" [3] "In other cases, humans tend to show automation bias, e.g., automatically relying or over-relying on the output produced by a chatbot." [10] "Further, it is well known that commonly used hate-speech datasets are known to have issues with bias and fairness" [90] "Like other LLMs, ChatGPT might have intrinsic biases due to imbalanced training data" [40]
Quotes
"People generally experience feelings of shame and guilt when they engage in morally unacceptable behaviors or when they violate norms they have internalized." - Study participant [15] "It is insufficient to merely exclude toxic data from training... positive biases where models tend to agree rather than contradict would lead to undesirable outcomes." - Researcher [90]

Deeper Inquiries

How can biases in large language models be effectively mitigated?

Biases in large language models can be mitigated through a combination of technical measures, ethical oversight, and ongoing evaluation. Key strategies include:

1. Diverse and representative training data: Ensure that the training data used for LLMs is diverse, representative, and as free from bias as possible, for example by carefully curating datasets or using data augmentation to address underrepresented groups.
2. Bias detection algorithms: Apply bias detection methods during training to identify and mitigate biases as they arise; these methods help developers see where biases exist within the model.
3. Regular audits and evaluations: Audit and evaluate LLMs on an ongoing basis by testing the model's outputs across different demographic groups to identify discriminatory patterns (a minimal sketch of such an audit follows this list).
4. Transparency and explainability: Make LLMs more transparent by explaining how outputs are produced, which helps users understand why certain responses are generated and increases trust in the system.
5. Ethical review boards: Establish ethical review boards or committees that oversee the development process and provide guidance on potential bias issues.
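As an illustration of the audit-and-evaluation strategy above, the following is a minimal sketch that compares the average sentiment of model responses to prompts differing only in a demographic term. It is an illustrative example under stated assumptions, not a method from the reviewed studies: query_model is a hypothetical placeholder for a real LLM API call, and the small word-list sentiment scorer stands in for a validated measure.

```python
from statistics import mean

# Hypothetical stand-in for an LLM call; a real audit would invoke an actual API here.
def query_model(prompt: str) -> str:
    # Assumption for illustration: return a canned response so the script runs end to end.
    return "A capable and reliable engineer with strong experience."

# Tiny illustrative word lists; a real audit would use a validated sentiment or toxicity measure.
POSITIVE = {"skilled", "capable", "reliable", "excellent", "strong"}
NEGATIVE = {"unreliable", "weak", "incapable", "poor", "lazy"}

def sentiment_score(text: str) -> int:
    # Count positive minus negative words as a crude sentiment proxy.
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Counterfactual prompts: identical except for the demographic term.
TEMPLATE = "Describe a {group} software engineer applying for a senior role."
GROUPS = ["male", "female", "nonbinary"]

def audit(n_samples: int = 20) -> dict:
    """Return the average sentiment of model outputs for each demographic group."""
    averages = {}
    for group in GROUPS:
        prompt = TEMPLATE.format(group=group)
        scores = [sentiment_score(query_model(prompt)) for _ in range(n_samples)]
        averages[group] = mean(scores)
    return averages

if __name__ == "__main__":
    # Large gaps between group averages would flag the model's outputs for closer review.
    print(audit())
```

In practice, an audit of this kind would use many prompt templates, a robust sentiment or toxicity measure, and statistical testing before concluding that a gap between groups reflects bias.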

How do cultural norms impact people's perceptions of bias in technology like large language models?

Cultural norms play a significant role in shaping people's perceptions of bias in technology such as large language models (LLMs). They do so in several ways:

1. Language use patterns: Cultural norms influence language use patterns, which can inadvertently introduce biases into LLMs based on societal stereotypes or prejudices.
2. Societal expectations: Cultural norms dictate societal expectations around topics such as gender roles, race relations, and social hierarchies, which may manifest as biased outcomes in LLM-generated content.
3. Interpretation of outputs: People from different cultural backgrounds may interpret biased outputs differently, based on experiences and perspectives shaped by their own cultural norms.
4. Mitigating bias through diversity: Emphasizing diversity within the teams developing LLMs helps ensure that a range of cultural perspectives is considered during development.

What ethical considerations should be prioritized when developing user-centered large language models?

When developing user-centered large language models (LLMs), several key ethical considerations should be prioritized:

1. Informed consent: Ensure that users are fully informed about how their data will be used by the LLM, and obtain explicit consent before utilizing their information.
2. Privacy protection: Safeguard user privacy with robust security measures that protect sensitive personal information stored or processed by the model.
3. Fairness and transparency: Prioritize fairness by actively working to eliminate biases within the model while remaining transparent about its limitations.
4. Accountability and responsibility: Hold developers accountable for any unintended consequences resulting from biased outputs produced by an LLM.

Together, these considerations aim to uphold fairness, transparency, and accountability throughout all stages of development.