Module 3, AI Narratives— Gabby Keiran

My first encounter with AI-generated media came roughly a year ago, when artists I have admired and followed for years on Instagram began posting updates on a lawsuit they were filing against Stability AI. I had been unaware that this level of generative processing power was available to the public, and, out of curiosity, I dove further into the realm of AI-generated art, its possibilities, and its detriments to the creative community as a whole.


I am quite fortunate to have enrolled in COMM 555 so soon after my discovery of this technology, as I feel I am beginning to broaden my perspective on these tools and, in doing so, to take a more balanced view of those who make use of their capabilities. Our third reading in particular led me to further dissect my own perspective, and the strictness with which I had adhered to thinking only of the unethical aspects of such technology. In the weeks since, I have been exploring ChatGPT and its uses in academic and professional contexts, employing it to aid in my job application process and as a resource to point me toward directions of research for my studies.

 

In my exploration, I have broadened my perspective on such technology, and I believe I have come away with a true understanding that “AI can be a useful tool, when applied appropriately, and poses real challenges if misused” (Benson, 2022).

 

For this assignment, I integrated aspects of Module 3’s readings, expounding on their ideas and playing with an uncanny notion of consciousness in ChatGPT. In the past, I have used this tool in my research to create abstracts of sections of papers, to see whether they could be integrated into my work. I began the process of narrative creation in the same way, copying three paragraphs of David Benson’s work from “AI in Fiction and the Future of War”, as I found this source to be the most intriguing. I drew parallels between my own experiences with AI and our additional reading for this week, Andrew Cox’s “AI and robots in Higher Education: Eighteen design fictions”. The way Cox’s narratives expanded on the use of text-generative tools like ChatGPT in research contexts made me examine the extent to which I, too, might be lured into an AI-centred learning system, so I decided to focus my own prompt around this setting as well.

 

My first prompt is as follows:

 

Create a fictional narrative based on the following text, integrate your own opinion as an AI robot, reference the writing style of Ursula K. Le Guin, keep it less than 400 words.

 

I chose to integrate the writing style of Ursula K. Le Guin, as I felt that many significant female fantasy and sci-fi authors were omitted from the discourse in all of the sources I referenced for this exercise. In many sci-fi circles, Le Guin is often overlooked as a contributor to the genre, but I believe that her stories’ focus on an omnipotent, primal evil created by man served as inspiration for many technologically focused fictions, while still heavily incorporating the notion of a grey area between good and evil. The text I used as the foundation of the narrative was taken directly from Benson’s research, as I copied and pasted the sections “AI As A Villain”, “AI As A Hero”, and “Both Portrayals Create Problems” following my prompt. I thought these sections would serve as an interesting basis for the computer’s background learning, and I wanted to see what “opinions” would be formed on this research by the subject of its critique.

 

The resulting output is as follows:






ChatGPT’s first creation was a rather hollow, lengthy narrative that leaned too heavily on Benson’s source material, and I was unimpressed with its borderline plagiarism and lack of a position on the topic. I therefore adjusted this narrative, keeping in mind the balanced perspective I hoped to achieve and avoiding the “polarization” that can occur in AI-themed narratives (Chubb et al., 2022). I also wanted to continue this mimicry of my own research process, playing with the notion of the AI program as both teacher and student, as it learned from its previous conception and built upon it.

 

With these intentions in mind, I wrote my second prompt:

 

Use the previous narrative as a background foundation for a new narrative, featuring an AI robot who works as the primary educator for children ages 5-17, suggest the pros and cons of using an AI in this context and integrate warning elements within the fiction. Keep it less than 300 words.

 

The resulting product was as follows:





Intriguingly, the product of this prompt seemed to mimic some of the narratives in Cox’s design fictions, bringing to mind the key questions posed in his abstract. The sense of dread, with an aftertaste of opportunism, served as evidence of the ways in which “our fears and hopes about the future often manifest themselves in science fiction” (Benson, 2022). I was intrigued by this unsettling dichotomy, as this fiction was created not by the hands of a human but by a machine. It would be far too easy to read further into ChatGPT’s creation, raising an eyebrow at the verbiage it chose, “molding young minds”, and at the way a robot glorified a creation that could be its own future, suggesting that one day AI such as itself may “surpass human capabilities” (ChatGPT, 2024).


As a matter of personal taste, I felt that some of its writing choices were a bit clichéd and rudimentary, down to its choice of setting name: “Cogsville” was a bit too on the nose for my preference. I had the software create another iteration of the same narrative, which immediately seemed elevated in its language. The descriptive language and tone mimicked the sci-fi stories Benson spoke of in his own work, and bore an eerie similarity to the way a human author might develop a writing style through experience and growth.

 

This final iteration read as follows:




 

As I write my analysis of these fictions, taking into account the sources that served as the foundation for their creation, I am reminded that AI in its current form is not as advanced as stories depict it to be (Benson, 2022). Yet, as it developed and advanced its language through iterations, it became clear that the speed at which AI is progressing will bring significant changes in its uses in society (Lee, 2022), and likewise in the potential dangers it could pose to us as humans, whose brains learn and process information at what may soon be a far lesser capacity than computers.



Sources


Wu, J., Ouyang, L., Ziegler, D. M., Stiennon, N., Lowe, R., Leike, J., & Christiano, P. (2021). Recursively summarizing books with human feedback. arXiv preprint arXiv:2109.10862.


Benson, D. (2022, March). AI in fiction and the future of war. The Strategy Bridge. https://thestrategybridge.org/the-bridge/2022/6/3/ai-in-fiction-and-the-future-of-war



Chubb, J., Reed, D., & Cowling, P. (2022). Expert views about missing AI narratives: Is there an AI story crisis? AI & Society. https://doi-org.login.ezproxy.library.ualberta.ca/10.1007/s00146-022-01548-2 or https://rdcu.be/c0KwN



Lee, K.-F. (2022, February 17). AI 2041: Ten visions for our future [Video]. YouTube. https://www.youtube.com/watch?v=IWTqS2_MfPE


OpenAI. (2024). ChatGPT [Large language model]. https://chat.openai.com/c/74944978-9370-4247-96a2-1c8617110de2

 

