Module 3




Module 3 of the course delves into the growing world of AI narratives. It offers a comprehensive overview of AI in writing, showcasing examples of AI-generated fiction and its impact on readers. This module encourages students to explore the unique capabilities of AI in creating narratives that differ from traditional print stories, examining the potential of AI to transform storytelling.

Learning Outcomes:

  • Gain a broad understanding of AI's role in writing and storytelling.
  • Evaluate AI-generated writing alongside traditional human-crafted stories.
  • Analyze and contrast the characteristics of print narratives with those of AI-generated content, focusing on storytelling differences, audience engagement, and publication possibilities.
You will learn how to contextualize digital literature within the spectrum of literature from the codex to contemporary online platforms. You will also apply theories of narrative to contemporary online works, critically analyze and explain the relation of text and image (and other modalities) in specific online works, and examine the role of classic texts (Birkerts) in today's understanding of digital fiction.

  1. Wu, J., et al. (2021). "Recursively summarising books with human feedback."
    This groundbreaking study explores the use of AI in summarizing complex literary works. The paper delves into the process where AI algorithms are trained with human feedback to create concise, yet comprehensive summaries of books. It presents a unique intersection of human creativity and AI efficiency, offering insights into how AI could augment human understanding of literature.


  2. Novel AI (2021).
    This website serves as a platform for AI-powered writing tools. It demonstrates the practical applications of AI in creative writing, providing users with AI-assisted tools for crafting narratives. The site exemplifies the blend of AI technology with human creativity, offering a glimpse into the future of storytelling where AI plays a collaborative role.


  3. Benson, D. (2022). "AI in Fiction and the Future of War."
    David Benson's article probes into the use of AI in writing war fiction, examining how AI can generate narratives that speculate on future warfare. The piece highlights the capacity of AI to extend beyond current human imagination, offering new dimensions in fiction writing, particularly in speculative and science fiction genres.


  4. Chubb, J., et al. (2022). "Expert views about missing AI narratives: is there an AI story crisis?"
    This paper delves into the discourse around AI narratives, addressing concerns about potential gaps in AI storytelling. It raises critical questions about the representation and diversity in AI-generated content, examining whether AI can truly capture the breadth and depth of human experiences and storytelling diversity.


  5. Suggested Reading: Howlarium (2022). "What will AI do to reading?"
    This article explores the implications of AI on the future of reading. It discusses how AI might transform readers' experiences, from interactive storytelling to personalized narrative journeys. The piece contemplates the changing landscape of reading in the age of AI, probing the potential shifts in reader engagement and content consumption.

These readings are relevant to the course overall because they provide a foundation for understanding how to use new media technologies to create and share stories, and they are essential for those who want to develop their skills and expertise in this field.




Comments

  1. Considering the Benson and Chubb readings were about our misguided fictions about AI, the ‘Ten Visions of our Future’ video does not make me feel like fiction has it all wrong. I’m not going to complain about potential medical applications or clean, efficient energy that spares our natural resources; I’m on board for that! What gives me some cause for alarm is the casual way Mr. Lee announces how thoroughly and flawlessly AI will be able to target everyone for specific advertising. That doesn’t feel far from the slightly dystopic, ad-driven world represented in Minority Report (see the attached link for a reference).

    Minority Report Clip: https://youtu.be/7bXJ_obaiYQ?si=kTyNZi27IZX5Hbfi

    It’s interesting how this also coincides with the material many of us are reading in COMM 505 right now. Mr. Lee seems almost delighted at the idea of driverless and automated tech. Those of us in COMM 505 might recognize this as platform capitalism. For all its technological wonder, it’s still a monopoly designed to cut costs by replacing workers and stamping out competitors. Maybe AI isn’t Skynet or Ultron, but at least I know what those two are all about. When we aren’t given a say in how our gathered data is used, and for what intent, it makes it seem a little more nefarious.

    Benson, D. C. (2022, June 3). AI in Fiction and the Future of War. The Strategy Bridge.
    https://thestrategybridge.org/the-bridge/2022/6/3/ai-in-fiction-and-the-future-of-war

    Center for Natural and Artificial Intelligence. (n.d.). AI 2041: Ten Visions for Our Future [Video]. YouTube. https://www.youtube.com/watch?v=IWTqS2_MfPE

    Chubb, J., Reed, D., & Cowling, P. (2022). Expert views about missing AI narratives: Is there an AI story crisis? AI & Society, 1–20. Advance online publication. https://doi.org/10.1007/s00146-022-01548-2

  2. In the article by Benson (2022), the following excerpt caught my attention: "AI can be a useful tool when applied appropriately and poses real challenges if misused. People need to be able to know when a tool can be used and what problems it might create."

    I find this statement highly relevant and agree with it. Distinguishing between realistic and unrealistic AI expectations is crucial to understanding its true potential and limitations. Some people hold extremely optimistic views about AI, such as the belief that fully autonomous planes will soon become a reality (Benson, 2022). However, I think such advancements might not happen because of the ethical and technological challenges associated with AI. For instance, self-driving cars must be programmed to handle complex accident scenarios (Amos, 2024). In a no-win situation, should the car prioritize saving an elderly pedestrian on the sidewalk or a child running into the street to retrieve a ball? This example underscores Benson’s (2022) point that while AI has potential, it must be applied with caution and a clear understanding of its limitations.

    I believe much of the fear surrounding AI stems from a lack of understanding, which is further exacerbated by overly optimistic and utopian narratives. Fictional portrayals of AI often contribute to unrealistic expectations (Benson, 2022), reinforcing the misconception that AI is either an all-powerful tool or a threat. When people are led to believe that self-driving vehicles and other AI advancements are just around the corner, it can intensify dystopian fears of AI taking over, a concern often held by those wary of technological progress (Benson, 2022). This aligns with Chubb et al. (2022), who highlight how AI narratives are often polarized between utopian and dystopian extremes, largely shaped by powerful stakeholders such as big tech companies.

    Big tech companies often promote AI as a transformative technology, emphasizing its potential. This can contribute to public skepticism, as many people perceive these companies as being primarily profit-driven and potentially harmful to society. Such perceptions can also further fuel fears about AI. To bridge the gap in public understanding, Chubb et al. (2022) argue that the public needs balanced narratives that accurately convey AI’s current capabilities and advancements. They propose integrating creative storytelling and art to improve public understanding, which I find a valuable approach. They can use these strategies to reshape the narratives surrounding AI. Fear often arises from a lack of knowledge, and by providing more accessible and engaging narratives, people can develop a clearer and more informed perspective on AI. I believe addressing misconceptions through storytelling and education can help shift the focus from unrealistic fears to a more balanced understanding of AI's actual impact and potential.

    References:
    * Amos, Z. (2024, June 15). The Ethical Considerations of Self-Driving Cars. Montreal AI Ethics Institute. https://montrealethics.ai/the-ethical-considerations-of-self-driving-cars/
    * Benson, D.C. (2022, June 3). AI in Fiction and the Future of War. The Strategy Bridge.
    https://thestrategybridge.org/the-bridge/2022/6/3/ai-in-fiction-and-the-future-of-war
    * Chubb, J., Reed, D., & Cowling, P. (2022). Expert views about missing AI narratives: Is there an AI story crisis? AI & Society, 39, 1107–1126. https://doi.org/10.1007/s00146-022-01548-2

  3. I appreciate the positive tone of this module’s readings as they examine both how AI can impact, and be impacted by, narratives. While Wu et al. (2021) explore how AI can help readers quickly receive information about various literature, especially when guided well by human feedback, Benson (2022) dives into the various ways that current narratives regarding AI may embellish the potential negative aspects of AI.

    While I am genuinely excited about the potential for AI to do positive things for humans, I did struggle with what felt like downplaying of the potential negative impacts of AI by Benson. While I agree that focusing on dystopian narratives with AI villains in the far-off future can seem to distract from the emergent problems of AI (Benson, 2022), I think this argument fails to consider:

    1) Generally, we should be thinking creatively about the potential issues with AI development and implementation, regardless of whether they seem dramatic, unrealistic, or far off in the current day. I’m sure that if we as a society had fully anticipated the negative impacts of social media that we see coming to full fruition decades later, such as trends toward increasing surveillance (Zuboff, 2019), there would have been earlier calls that steered social media in a better direction from earlier in its development.

    2) Many narratives that centralize AI as a villain are actually social commentaries that examine pre-existing issues which AI only exacerbates. For example, in the movie The Matrix (Wachowski & Wachowski, 1999), it seems like the Matrix (a form of AI) is the issue, but really the film criticizes blind conformity and passive societal acceptance. I think there is merit in considering how AI will extend these social issues and steering development accordingly.

    However, we also have present issues in the current day that should be addressed before we immediately jump to Terminator-style narratives. For example, as Benson (2022) mentions, AI is resource intensive and the environmental impact of AI is a current area we can look at improving as it continues to grow as a tool.

    Sources:

    Benson, D. C. (2022, June 3). AI in Fiction and the Future of War. The Strategy Bridge. https://thestrategybridge.org/the-bridge/2022/6/3/ai-in-fiction-and-the-future-of-war

    Wachowski, L., & Wachowski, L. (Directors). (1999). The Matrix [Film]. Warner Bros.

    Wu, J., Ouyang, L., Ziegler, D. M., Stiennon, N., Lowe, R., Leike, J., & Christiano, P. (2021). Recursively summarising books with human feedback. arXiv. https://arxiv.org/abs/2109.10862

    Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Chapter 2). PublicAffairs.

  4. Reading through Module 3, I’m struck by how strongly our perceptions of AI are shaped by fictional narratives. Benson (2022) compellingly argues that exaggerated portrayals of AI—as either villainous or heroically human-like—skew public understanding, distracting us from AI's real-world complexities. While movies like The Terminator embed fears of an AI apocalypse, the genuine risks we face today are more subtle, like algorithmic biases or economic displacement. As Benson (2022) bluntly states, “Good stories...are even less predictive of the future than attempts to predict the future.”

    Chubb et al. (2022) similarly warn of an AI "story crisis," criticizing dominant narratives as polarized extremes that "mislead public understandings and conceptions of AI." They rightly highlight the urgent need for nuanced narratives reflecting everyday AI realities—stories about AI in healthcare, sustainability, or social justice that rarely capture the imagination but profoundly shape our lives. This aligns with Kai-Fu Lee’s observations (as cited in Module 3) that AI excels in data-driven optimization but struggles with abstraction, creativity, and common sense—qualities popular narratives often falsely attribute to AI.

    Wu et al. (2021) demonstrate a more realistic application of AI: human-assisted recursive summarization, achieving coherence through careful human feedback rather than magical machine autonomy. Their transparency about AI's limitations—such as struggles with narrative coherence—grounds AI in reality, underscoring that human insight remains irreplaceable.

    It's clear we urgently need balanced, responsible storytelling about AI. As Chubb et al. (2022) advocate, shifting narratives away from sensationalism toward grounded, responsible portrayals can foster ethical public engagement and informed policymaking. Ultimately, AI’s future isn't predetermined by dystopian fiction but shaped by our present understanding.

    References

    Benson, D. C. (2022, June 3). AI in fiction and the future of war. The Strategy Bridge. https://thestrategybridge.org/the-bridge/2022/6/3/ai-in-fiction-and-the-future-of-war

    Chubb, J., Reed, D., & Cowling, P. (2022). Expert views about missing AI narratives: Is there an AI story crisis? AI & Society. https://doi.org/10.1007/s00146-022-01548-2

    Wu, J., et al. (2021). Recursively summarising books with human feedback. arXiv preprint. https://arxiv.org/abs/2109.10862

  5. “What is missing are narratives and stories about what people might want or hope for from AI in their everyday life; these niche stories are often suppressed because they are less sensational, and hence less valuable in capturing attention and advertising revenue” (Chubb et al., 2022, p. 1108). Reading the article “Expert views about missing AI narratives: is there an AI story crisis?” made me realize how much our understanding of artificial intelligence is shaped by narrow, extreme narratives—either dystopian fears or overly optimistic promises. We need more stories that are close to life and include diverse voices so that the public can have a more comprehensive and realistic understanding of the development status and potential of AI technology, rather than being led by exaggerated imagination.

    Meanwhile, Benson also mentioned that: "Fictional AI portrayals rarely explain how AI develops. Short-cutting the development process gives the impression that programs can go from lines of code to sentience in moments. Training AI models is complex, involved, and―most importantly―resource-intensive. Knowing constraints on AI are strategically important and should reassure the fearful while tempering optimistic expectations" (Benson, 2022). I deeply learned that the exaggerated descriptions of artificial intelligence in many stories may be very misleading. They often skip the long and complex process of developing artificial intelligence. However, in reality, training an artificial intelligence model requires time, expertise, and a large amount of resources. Therefore, we should be more rational and critical when facing discussions about AI. Understanding the real limits of AI development will not only help alleviate the public's fear of the threat of AI but also prevent us from harboring unrealistic fantasies about its capabilities, thus promoting more responsible and objective technology application and policymaking.


    References:

    Benson, D. C. (2022, June 3). AI in Fiction and the Future of War. The Strategy Bridge. https://thestrategybridge.org/the-bridge/2022/6/3/ai-in-fiction-and-the-future-of-war

    Chubb, J., Reed, D., & Cowling, P. (2022). Expert views about missing AI narratives: Is there an AI story crisis? AI & Society. https://doi.org/10.1007/s00146-022-01548-2 or https://rdcu.be/c0KwN

  6. I found that Module 3 presents a fascinating exploration of artificial intelligence as both a tool and a collaborator in the storytelling process. The comparison between AI-generated narratives and traditional human-authored stories raises important questions not only about creativity but also about narrative authenticity, ethics, and representation.

    One of the most thought-provoking readings is Chubb et al. (2022), which points out the lack of diversity and emotional nuance in AI-generated stories. While models like GPT can simulate language convincingly, their narratives are built on existing data and thus risk reproducing biases and gaps in cultural representation. This critique is echoed in Howlarium (2022), which questions whether AI can truly innovate in literature or merely replicate existing trends.

    Benson (2022) offers a compelling look at how AI-generated speculative fiction, particularly around future warfare, can stretch the limits of human imagination, though he warns against narratives that lack ethical depth. Meanwhile, Wu et al. (2021) show the productive side of AI when trained with human feedback, illustrating a hybrid model of storytelling.

    An insightful addition to this discussion is Roose (2023), who explores how large language models like ChatGPT are being perceived as “co-authors” rather than tools. Roose raises ethical concerns around AI’s influence in journalism and fiction, especially as AI’s narrative voice becomes indistinguishable from that of a human. This development challenges our definitions of authorship, originality, and creative responsibility.

    Ultimately, AI is not replacing human storytelling, it is reshaping it. As we navigate these new forms, it becomes crucial to preserve human emotional insight and cultural intentionality in narrative-making.

    References

      Benson, D. (2022, June 3). AI in Fiction and the Future of War. The Strategy Bridge. https://thestrategybridge.org/the-bridge/2022/6/3/ai-in-fiction-and-the-future-of-war

      Chubb, J., Reed, D., & Cowling, P. (2022). Expert views about missing AI narratives: Is there an AI story crisis? AI & Society. https://doi.org/10.1007/s00146-022-01548-2

      Howlarium. (2022). What will AI do to reading? https://howlarium.com

      Novel AI. (2021, December 18). AI-assisted storytelling platform. https://novelai.net

      Roose, K. (2023, March 16). Who wrote this? How AI is reshaping authorship. The New York Times. https://www.nytimes.com/2023/03/16/technology/ai-writing-chatgpt-authorship.html

      Wu, J., et al. (2021). Recursively summarising books with human feedback. arXiv. https://arxiv.org/abs/2109.10862

