Module 3 AI Narratives - Eryi

My undergraduate thesis explored artificial intelligence in storytelling. At the time, AI-generated writing was just emerging, and it sparked a question in my mind: would readers be able to distinguish between works written by artificial intelligence and those crafted by human storytellers? Could there be biases against narratives produced by AI? With that question in mind, I designed an experiment rooted in reader-reception theory.


The experiment explored the reception of AI-generated narratives by placing them alongside human-authored works. I divided a piece of AI-generated writing into two parts and attributed each to a different author: one to a renowned human writer, the other to an artificial intelligence. Readers then evaluated various elements of each narrative, and I used quantitative analysis to test for significant differences between the assessments of the two versions.
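
To make the comparison concrete, here is a minimal sketch of how such a dimension-by-dimension test could be run in code; the rating dimensions, the made-up numbers, and the choice of an independent-samples t-test are illustrative assumptions, not a record of the thesis's actual analysis.

```python
# Illustrative sketch only: dimensions, numbers, and the use of an
# independent-samples t-test are assumptions, not the original analysis.
from scipy import stats

# Invented placeholder ratings (e.g., on a 1-7 scale) for the same
# AI-generated text presented under two different author attributions.
ratings_human_attributed = {
    "imagination": [5, 6, 4, 5, 6, 5],
    "character_development": [4, 5, 5, 4, 6, 5],
}
ratings_ai_attributed = {
    "imagination": [3, 4, 4, 3, 5, 4],
    "character_development": [4, 5, 4, 5, 5, 5],
}

# Compare the two attribution conditions dimension by dimension.
for dimension in ratings_human_attributed:
    t_stat, p_value = stats.ttest_ind(
        ratings_human_attributed[dimension],
        ratings_ai_attributed[dimension],
    )
    verdict = "significant" if p_value < 0.05 else "not significant"
    print(f"{dimension}: t = {t_stat:.2f}, p = {p_value:.3f} ({verdict})")
```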


Guess what? In the end, readers showed no significant differences in their perceptions of the two attributed authors, except in one aspect: "imagination." It appears that readers subconsciously do not fully acknowledge the imaginative capabilities of AI. When it came to the story itself, character development, and world-building, however, they expressed a high level of approval for the AI's performance.


This approach aimed not only to unravel the perceptual distinctions between AI and human storytelling but also to gauge the biases readers might hold against AI-generated literary works. The methodology centered on a close examination of reader responses: how the narratives were perceived and assessed within a comparative framework.


Upon revisiting these reading materials, I have developed some new insights. I have come to realize that AI seems consistently drawn to creating science fiction: the text I selected at the time was undeniably science fiction, and the writers involved in the project were all sci-fi authors. As David C. Benson argues in "AI in Fiction and the Future of War" (2022), artificial intelligence is inherently intertwined with science fiction literature, having originated as a concept within the genre. Whether portrayed as a villain or a savior, AI's identity is shaped by literary narratives.


I came across Ginger Han's reflections from Module 3: can AI create stories that evoke a sense of sadness? This sparked a similar thought in me. While AI has been used extensively for algorithmic copywriting, in literary creation it seems fixated on imagining technology, functionality, and the future; there has been far less exploration of capturing other human emotions.


I experimented with an AI story-generation tool linked in the article, and the results were amusing. I prompted it to explore the emotional journey of a boy bullied at school, hoping to evoke a specific set of feelings. It immediately took an unexpected turn: the AI wrote that the boy discovered his own magical power, and by the second line he had become the Earth's savior.


The story I had envisioned as something like the Korean drama "The Glory" suddenly turned into the "X-Men" film series. Digging deeper, I wondered whether this represents an idealized form of revenge or an extremely violent form of retribution. AI seems accustomed to this mode of thinking, resolving everything through a "science fiction" lens. In this it aligns with certain human creative works: when legal and social systems fail to deliver justice, stories resort to primal, idealized, instinctive means of dealing with evildoers.


Reflecting on the idealized vengeance of the drama I had envisioned, and on the extreme retribution reminiscent of "X-Men," raises questions about AI's role in perpetuating such narrative patterns. Does AI inherently lean toward these futuristic, often fantastical resolutions as a default mode of problem-solving? And given that the corpus AI is trained on comes from previous human creations, is this inclination a reflection of a broader human desire for justice and order in a world where conventional systems fall short?


There is another layer to consider: whether these narratives serve as an outlet for the human need to envision justice when real-world mechanisms seem inadequate. That question prompts reflection on the societal impact of storytelling, AI's role in shaping these narratives, and the potential consequences of aligning creative works with an instinctive, idealized form of justice.


In essence, exploring AI's storytelling tendencies not only offers insight into its technological capabilities but also serves as a mirror for certain aspects of the human psyche: our quest for justice and order, and our perennial search for ways to address the complexities of the world.
