I always get a kick out of customizing game characters – picking the face shape, skin color, and outfit until I’ve decided that avatar is totally me. But when it came to using generative AI, with its seemingly infinite possibilities, to create my selfie, it was the total opposite. I typed my description into DALL·E, hoping for a realistic portrait of myself. On the first attempt, the AI decided I should be a dude (Image 1). Even after I added more precise details, the bizarre results showed it still had no clue what some of my descriptions meant. I noticed a pattern. Without super-specific prompts, the AI tended to default to certain stereotypes. When I left gender open, I got a man. When I specified female, I got a Western face. When I specified Chinese, I got flawless Instagram-ready “Chinese” women (Images 2 & 3).
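For anyone curious how this kind of informal probe could be repeated programmatically rather than by typing into the web interface, here is a minimal sketch. The prompts, model name, and setup are illustrative assumptions based on OpenAI’s current Python client, not what I actually ran:

```python
# A minimal sketch of the prompt-probing I did by hand, assuming the
# openai Python package (v1+) and an API key in the environment.
# The prompt wording here is illustrative, not my exact description.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "a realistic portrait of a person taking a selfie",            # gender left open
    "a realistic portrait of a woman taking a selfie",             # gender specified
    "a realistic portrait of a Chinese woman with a round face "   # ethnicity + features
    "and peach-blossom eyes taking a selfie",
]

for prompt in prompts:
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    # Compare what the model defaults to as the prompt gets more specific.
    print(prompt, "->", result.data[0].url)
```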
😤😩Why is it so hard to find “myself” in all this “endless” creative power of the digital world?
This is also a question I have come to understand, and continue to reflect on, throughout this class. Now, I have found part of the answer.
There have been plenty of warnings that algorithmic systems and mainstream AI often reflect the values and assumptions of their creators and training data, which may be biased or manipulated (Chubb et al., 2021; Fister, 2018). I first learned how gender-based bias creeps into AI. As Joy Buolamwini and Timnit Gebru (2018) famously showed, some AI systems struggle with accuracy on darker-skinned and female faces, mostly due to training data skewed toward white male images. In my case, this showed up as the AI first “seeing” me as a man with more realistic skin, then handing me women with a beauty makeover I never asked for. Both point to gender norms encoded from a male-dominated tech industry’s perspective, reinforcing stereotypes about femininity and beauty (UNESCO, 2019).
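To make “intersectional accuracy disparities” concrete, here is a toy sketch of the kind of subgroup audit behind that finding. The records, labels, and numbers are invented for illustration; this is not Buolamwini and Gebru’s actual code or data:

```python
# A toy illustration of a subgroup accuracy audit in the spirit of Gender Shades.
# The records below are invented; a real audit uses a benchmark labeled by
# gender and skin type.
from collections import defaultdict

# (true_gender, skin_type, predicted_gender) for each test image
predictions = [
    ("female", "darker",  "male"),
    ("female", "darker",  "female"),
    ("female", "lighter", "female"),
    ("male",   "darker",  "male"),
    ("male",   "lighter", "male"),
    ("female", "lighter", "female"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true_gender, skin, predicted in predictions:
    group = (true_gender, skin)
    total[group] += 1
    correct[group] += int(predicted == true_gender)

# Accuracy is reported per intersectional group, not just overall,
# which is what exposes the gap for darker-skinned women.
for group in sorted(total):
    print(group, f"{correct[group] / total[group]:.0%}")
```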
Meanwhile, the racialized aesthetics were impossible to ignore. I had to specify “Chinese” and other features repeatedly to get something close to my real-life ethnicity, yet it still fell short of a round face or peach-blossom eyes. This difficulty indicates that AI trained largely on Western data erases or distorts non-Western traits (Buolamwini & Gebru, 2018), which reflects “The Whiteness of AI” (Cave & Dihal, 2020), where Eurocentric norms dominate. Excluding non-Western identities can lead to digital colonialism, reinforcing Western-centric hierarchies that overlook or misinterpret non-Western features (Mohamed et al., 2020).
💭🙀Aren’t these just new filters? It’s a strange new awareness compared with the old selfie filters of dog ears or flower crowns.
By this point, what had started as play was surfacing heavy topics: algorithmic bias, racialized beauty norms, cultural erasure, and even gender dynamics in tech. Together they highlighted a truth: even our selfies are political and cultural objects. It made concrete what Joy Buolamwini warned after uncovering bias in facial recognition – that if we aren’t paying attention, we risk “perpetuating inequality in the guise of machine neutrality” (Hardesty, 2018).
🙅I then started wondering: what about users who don’t know they can push back against these filters? Or those who internalize these AI outputs as aspirational?
It makes me return to the importance of transliteracy. Transliteracy is “the ability to read, write, and interact across a range of platforms, tools, and media,” emphasizing a critical awareness of our engagements across digital, print, audiovisual, and cultural domains (Thomas et al., 2007). In the digital age, transliteracy calls not for passive consumption of media outputs but rather for a proactive negotiation of meanings across diverse media forms, helping us understand and challenge embedded stereotypes and biases.
With transliteracy, we see the selfie not just as a static, isolated image but as a complex narrative negotiated across digital platforms, algorithmic processes, and our own identities. It encourages users to critically interpret how identities are digitally mediated and manipulated, prompting them to actively engage rather than passively accept algorithmic representations. This multilayered interaction means recognizing algorithmic biases while creatively using technological tools to reclaim and redefine self-representation on our terms. Technobiophilia, a related concept also introduced by Sue Thomas, further enriches this transliterate approach. It highlights our innate tendency to integrate technology with nature, aligning digital practices with a broader humanistic engagement (Thomas, 2013). Such interdisciplinary awareness positions us to better grasp the hybridized space of our online and offline identities, allowing us to thoughtfully bridge digital creation with real-world implications.
Prompted by this class, I feel that writing about this in a casual blog post is also part of the journey. It’s an exercise in transliteracy, moving between scholarly analysis and accessible storytelling, and I feel it as I switch from a formal essay to a conversational blog. The medium (a blog, with its informal vibe) lets me be more candid and personal about what I felt seeing that AI image💞.
References
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html
Cave, S., & Dihal, K. (2020). The whiteness of AI. Philosophy & Technology, 33(4), 685–703. https://doi.org/10.1007/s13347-020-00415-6
Chubb, J., Cowling, P., & Reed, D. (2021). Speeding up to keep up: Exploring the use of AI in the research process. AI & Society, 36(3), 999–1017. https://doi.org/10.1007/s00146-020-01040-2
Fister, B. (2018). Catching up with Safiya Noble’s Algorithms of Oppression. Inside Higher Ed. https://www.insidehighered.com/blogs/library-babel-fish/catching-safiya-noble%E2%80%99s-algorithms-oppression-1
Hardesty, L. (2018, February 11). Study finds gender and skin-type bias in commercial artificial-intelligence systems. MIT News. https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
Liu, F., Ford, D., Parnin, C., & Dabbish, L. (2018). Selfies as social movements: Influences on participation and perceived impact on stereotypes. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW). https://doi.org/10.1145/3134707
Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659–684. https://doi.org/10.1007/s13347-020-00405-8
Thomas, S. (2013). Technobiophilia: Nature and Cyberspace. Bloomsbury Academic.
Thomas, S. (2014). Next nature: “Nature caused by people.” Journal of Professional Communication, 3(2), 33–38. https://doi.org/10.15173/jpc.v3i2.155
Thomas, S., Joseph, C., Laccetti, J., Mason, B., Mills, S., Perril, S., & Pullinger, K. (2007). Transliteracy: Crossing divides. First Monday, 12(12). https://doi.org/10.5210/fm.v12i12.2060
UNESCO. (2019). I’d blush if I could: Closing gender divides in digital skills through education. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000367416.locale=en