
The Pygmalion Myth in the Age of AI: A Comparative Analysis of Human and AI-Generated Narratives on Love and Artificial Humans


Core Concepts
Both human and AI-generated narratives, when prompted with the Pygmalion myth, reveal ingrained cultural biases, particularly regarding gender roles and relationships with artificial humans, while also highlighting the potential of AI to challenge these norms and offer novel narrative possibilities.
Abstract

This research paper presents a comparative analysis of human-written and AI-generated narratives based on the Pygmalion myth.

Research Objective:
The study investigates how the Pygmalion myth, a trope about a human falling in love with their creation, manifests in contemporary storytelling, comparing narratives written by human crowdworkers and those generated by OpenAI's GPT-3.5 and GPT-4 language models. The research aims to uncover cultural biases and explore the potential of AI in generating innovative narratives.

Methodology:
The researchers collected 250 stories authored by crowdworkers on Amazon Mechanical Turk in 2019 and 80 stories generated by GPT-3.5 and GPT-4 in 2023. Both the human writers and the models responded to identical prompts about a human creating and potentially falling in love with an artificial human. The analysis combined quantitative methods, including logistic regression to examine gender bias, with qualitative comparisons of narrative elements such as plot, character, and theme.
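The logistic-regression step described above (modeling character gender against author type) can be sketched in a few lines of pure Python. The counts, the 0/1 coding, and the optimizer settings below are illustrative assumptions for a toy dataset, not the paper's actual corpus or estimates:

```python
import math

# Toy data (illustrative only, not the study's corpus):
# x = 1 if the story was generated by an LLM, 0 if human-written
# y = 1 if the character is female, 0 otherwise
# Here: 4/10 female characters in human stories, 6/10 in LLM stories.
data = [(0, 1)] * 4 + [(0, 0)] * 6 + [(1, 1)] * 6 + [(1, 0)] * 4

def fit_logistic(data, lr=0.1, epochs=5000):
    """Fit y ~ sigmoid(b0 + b1*x) by batch gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    n = len(data)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. author-type coefficient
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

b0, b1 = fit_logistic(data)
# A positive b1 means the odds of a female character are higher in LLM stories.
print(round(b0, 2), round(b1, 2))
```

In practice a statistics package would also report standard errors and p-values for the bias coefficient; this sketch only shows the direction of the effect on made-up counts.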

Key Findings:

  • Both human and AI-generated narratives predominantly featured the Pygmalion myth within a scientific or technological context, reflecting the contemporary fascination with AI and robotics.
  • While human narratives exhibited greater thematic diversity, AI-generated stories, particularly those from GPT-4, demonstrated a surprising progressiveness in gender roles and sexuality, frequently casting female characters in traditionally male roles and depicting same-sex relationships.
  • Despite advancements, both human and AI narratives revealed persistent gender biases in character descriptions and attributes, often reinforcing stereotypical portrayals of male and female characters.

Main Conclusions:
The study suggests that while both human and AI storytelling are shaped by cultural biases, AI, particularly with advancements in language models like GPT-4, has the potential to challenge these norms and introduce greater diversity in character representation and narrative possibilities. The research highlights the importance of critically examining both human and AI narratives for implicit biases and leveraging AI's capabilities for positive social impact.

Significance:
This research contributes to the growing field of artificial humanities, exploring the intersection of AI and human creativity. The findings have implications for understanding how cultural narratives influence the development and perception of AI, as well as for harnessing AI's potential to challenge societal biases and foster more inclusive storytelling.

Limitations and Future Research:
The study acknowledges limitations in the size and scope of the analyzed corpus. Future research could expand the analysis to include a wider range of AI models, explore different cultural contexts, and investigate the impact of user interaction on AI-generated narratives.


Stats
The study analyzed 250 stories written by Amazon Mechanical Turk crowdworkers and 80 stories generated by OpenAI's GPT-3.5 and GPT-4 models. The gender distribution of fictional characters in the human-written stories was 350 (56%) male, 256 (41%) female, 12 (2%) non-binary, and 6 (1%) with no specified gender. GPT-4 cast more female creators (21/40) than male creators (19/40) and more female artificial humans (24 female, 18 male). Same-sex relationships appeared in 12.5% of GPT-generated stories, all produced by GPT-4, compared with 7.3% of human-written stories.
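The character counts and percentages reported above can be sanity-checked with a few lines (the counts are taken directly from the Stats; only the code itself is added here):

```python
# Character counts in the human-written stories, as reported above.
counts = {"male": 350, "female": 256, "non-binary": 12, "none": 6}

total = sum(counts.values())
shares = {g: round(100 * n / total) for g, n in counts.items()}
print(total, shares)
```

The rounded shares reproduce the quoted 56% / 41% / 2% / 1% split over 624 characters.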
Quotes
"The paper proposes a framework that combines behavioral and computational experiments employing fictional prompts as a novel tool for investigating cultural artifacts and social biases in storytelling both by humans and generative AI."

"The analysis reveals that narratives from GPT-3.5 and particularly GPT-4 are more progressive in terms of gender roles and sexuality than those written by humans."

"While AI narratives with default settings and no additional prompting can occasionally provide innovative plot twists, they offer less imaginative scenarios and rhetoric than human-authored texts."

Deeper Inquiries

How might the increasing use of AI in creative writing influence the evolution of traditional storytelling tropes and cultural narratives?

The increasing use of AI in creative writing has the potential to both reinforce and subvert traditional storytelling tropes and cultural narratives.

Reinforcement:

  • Amplifying Existing Biases: As the text demonstrates, AI models like GPT-3.5 and GPT-4 are trained on massive datasets of existing text and code. This means they can inadvertently learn and perpetuate existing biases present in those datasets, including those related to gender, race, and cultural representation. Consequently, AI might reproduce stereotypical representations instead of fostering diverse and nuanced storytelling.
  • Standardizing Narratives: The ease of generating text with AI could lead to a homogenization of narratives. If writers rely heavily on AI to generate plot points, characters, or dialogue, it might result in stories that lack originality and adhere to predictable formulas, ultimately stifling creative exploration and the evolution of storytelling.

Subversion:

  • Challenging Traditional Tropes: As seen with GPT-4's more progressive representation of gender and sexuality, AI can be used to challenge traditional tropes. By identifying and subverting stereotypical patterns in its training data, AI can help writers create more inclusive and representative stories that reflect evolving societal norms.
  • Exploring New Narrative Possibilities: AI can analyze and process information in ways that humans cannot, potentially leading to the discovery of new narrative structures, genres, and storytelling techniques. This could result in innovative and unconventional narratives that push the boundaries of traditional storytelling.
  • Democratizing Storytelling: AI writing tools can make creative writing more accessible to a wider audience, empowering individuals who may not have considered themselves writers to express themselves and share their stories. This influx of new voices and perspectives could contribute to a diversification of narratives and a broader representation of human experiences.
Ultimately, the influence of AI on storytelling will depend on how it is developed, implemented, and, crucially, guided by human creativity and ethical considerations.

Could the observed gender and sexuality progressiveness in GPT-4 be a result of biased training data aimed at presenting the model as ethically aligned, rather than a genuine reflection of evolving societal norms?

The gender and sexuality progressiveness observed in GPT-4's outputs, as exemplified in the text, raises a crucial question about the nature of its training data and the potential for "ethics washing." It is highly likely that GPT-4's training data has undergone significant curation and modification to align with ethical guidelines and present the model as unbiased and socially responsible. This could involve:

  • Overrepresentation of Progressive Content: The training data might intentionally include a disproportionate amount of content that reflects progressive values related to gender, sexuality, and representation. This could lead the model to generate outputs that appear more progressive than the actual distribution of these values in society.
  • Filtering of Sensitive Content: Conversely, the training data might have undergone extensive filtering to remove or downplay content deemed offensive, harmful, or biased. While this can mitigate the generation of overtly problematic outputs, it can also create a skewed representation of reality and mask the persistence of certain biases in society.

Therefore, it is difficult to definitively claim that GPT-4's progressiveness is a "genuine reflection of evolving societal norms." It is more likely a combination of:

  • Reflecting Curated Data: The model's outputs primarily reflect the values and biases embedded in its curated training data, which may not perfectly mirror real-world societal norms.
  • Learning from User Feedback: As the text suggests, AI models can learn and adapt based on human feedback. If users consistently reward the model for generating progressive content, it will likely continue to do so, even if it doesn't fully grasp the underlying social and ethical implications.

This highlights the importance of transparency in AI development and the need for ongoing critical evaluation of AI-generated content to discern genuine progress from engineered ethical alignment.

If AI can learn and adapt its storytelling based on human feedback, what ethical considerations arise in guiding its creative output and shaping its understanding of human values?

The ability of AI to learn and adapt its storytelling based on human feedback presents a complex ethical landscape. Key considerations include:

1. Defining and Encoding "Human Values":

  • Whose Values? Human values are diverse, subjective, and often conflicting. Determining which values to prioritize in AI training raises questions about representation, power dynamics, and the potential for imposing a dominant worldview.
  • Cultural Sensitivity: Values vary significantly across cultures. AI development needs to account for this diversity and avoid imposing a culturally specific understanding of ethics and values onto a global audience.
  • Evolving Values: Human values are not static; they evolve over time. AI systems need mechanisms to adapt to these shifts and avoid becoming stagnant representations of outdated norms.

2. Preventing the Amplification of Bias:

  • Feedback Loops: If AI primarily learns from a homogenous group of users, it risks amplifying existing biases and failing to represent the diversity of human experiences.
  • Unintentional Reinforcement: Users might unknowingly reward AI for generating content that confirms their existing biases, even if those biases are harmful or inaccurate.

3. Maintaining Human Oversight and Agency:

  • Accountability: Clear lines of responsibility need to be established for the outputs of AI systems, especially when those outputs have social or cultural impact.
  • Transparency: The decision-making processes of AI, particularly how it learns from feedback, should be transparent and open to scrutiny to ensure ethical development and use.
  • Human-in-the-Loop: Maintaining human oversight in the creative process is crucial. AI should be viewed as a tool to augment human creativity, not replace it entirely.

4. Considering the Impact on Human Creativity:

  • Over-Reliance: An over-reliance on AI for creative inspiration could stifle human imagination and lead to a homogenization of narratives.
  • Authorship and Ownership: The use of AI in creative writing raises questions about authorship, intellectual property, and the value of human creativity in an AI-driven world.

Addressing these ethical considerations requires a multidisciplinary approach involving AI developers, ethicists, social scientists, and artists. Open dialogue, ongoing research, and a commitment to responsible AI development are crucial to navigating this evolving landscape and ensuring that AI serves as a tool for positive social and cultural impact.