Crafting My Digital Self: Reflecting on Identity, Ethics, and Creativity in the Digital Mirror
By Ruolan Lin
1. The Journey: Creating Myself, Digitally
Recently, my father sent me a Ghibli-style portrait that an AI had generated from a photo of me. I was pleasantly surprised and messaged him, "You did this?!", along with a few shocked-face memes. That exchange brought back the process of creating my own AI-generated selfies. To be honest, I had seen others use AI to generate pictures online, but this was the first time I had tried to generate digital selfies of myself with an AI tool. At first I was genuinely confused and torn about which tool to use, because there are so many AI tools on the internet nowadays. After comparing several, the only one I was relatively familiar with was DALL·E. Moreover, DALL·E's text-to-image function can accurately handle input variables (e.g., personal characteristics, language, cultural markers). So I ultimately chose DALL·E for this selfie exploration.

Using DALL·E, I generated four AI selfies by changing the prompts slightly: from basic identity traits to deeply personal details, from English to Mandarin, from a global student look to a Chinese urban look. At first it felt thrilling, as if I had full control over my digital representation. However, there were definitely some roadblocks along the way. Visually, these versions were little more than stereotypes that DALL·E had collected and reassembled from the text descriptions I provided. My photos felt like imitations of the mainstream aesthetics the tool had absorbed, rather than expressions of anything authentic.
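For readers curious about how this kind of prompt variation could be reproduced programmatically, here is a minimal sketch using the OpenAI Python library's image-generation endpoint. The prompts and labels below are illustrative placeholders, not the exact wording I used.

```python
# A minimal sketch: generating selfie variations by swapping prompt details.
# Assumes the OpenAI Python library (>=1.0) is installed and OPENAI_API_KEY is set;
# the prompts are illustrative placeholders, not my actual descriptions.
from openai import OpenAI

client = OpenAI()

prompts = {
    "basic_en": "A selfie of a young woman with long black hair, wearing a black hoodie, no makeup",
    "detailed_en": "A selfie of a minimalist international student, black hoodie, no makeup, natural lighting",
    "basic_zh": "一张年轻女生的自拍，黑色连帽衫，素颜，极简风格",
    "urban_cn": "A selfie of a young woman in modern Chinese urban street fashion",
}

for label, prompt in prompts.items():
    # Each call returns one generated image, exposed as a URL.
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    print(label, result.data[0].url)
```

Comparing the outputs side by side is what made the pattern visible: small wording changes shifted the style, but every version drifted toward the same polished, conventional look.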
2. AI as Curator of the Digital Self
Selfies, as scholars like Gunthert (2015) and Tifentale (2018) remind us, are more than vanity; they are cultural performances. But what happens when we outsource that performance to AI? Creating these images prompted me to ask: Who am I in the digital space? Can I make good use of AI tools to generate my selfie image? Can AI tools fully understand what I am saying, or does AI's understanding turn me into a completely different person? It turns out that the selfie images generated by AI tools are still rather one-sided. Raigoso (2023) offers a comprehensive critique of AI-generated imagery, arguing that AI tools are not neutral but reflect historical and societal biases. They cannot capture much of the depth and diversity of a person's real life. I favor black hoodies, no makeup, and minimalism. But the AI-generated portraits told another story: subtly airbrushed skin, soft pink blush, perfect symmetry, and even bangs I never asked for. It wasn't offensive, but it wasn't me either. My AI selfies didn't just reflect my input; they reflected what the system thinks people want to see. In this sense, my digital self became a platformized version of me, shaped less by who I am and more by what the algorithm has learned is socially acceptable.

3. Ethics and Cultural Dimensions
My description of “Chinese urban fashion” produced images full of traditional elements, such as pagodas, red-and-gold motifs, and traditional Chinese clothing. But that is not what a real Chinese urban scene looks like. This is not just an aesthetic misstep; it is a manifestation of the algorithm’s cultural stereotypes and its lag behind the present: the AI tool simply has not kept up with modern China well enough to understand what I was describing. As Stevenson (2018) reminds us, early visions of the web were driven by dreams of cultural connectivity. Yet AI’s training data often reflects Eurocentric or Westernized norms, leading to reductive representations of non-Western cultures. In this way, AI doesn’t necessarily create new stereotypes, but it repackages old ones in digital form. At the same time, I wonder whether the descriptive language I use unintentionally reinforces stereotypes. Do I choose certain visual features because they reflect me, or because they fit social expectations of attractiveness? As digital citizens, we must be aware of how our statements are interpreted, reused, or misused.

Ethically, I also considered visibility. The more “true” and revealing my representation became, the more exposed I felt. In a digital world where surveillance is often invisible but constant, self-representation becomes a form of negotiated vulnerability. Meanwhile, the potential danger is that as these images become more refined and receive more praise or “likes,” they begin to shape our perception of ourselves. The more we accept our AI-optimized self as “better,” the more we begin to emulate it. In this way, digital identity becomes a cycle: AI predicts what we should look like, we begin to emulate that prediction, and future AI is then trained on the emulation.
4. Translating Theory to Blog
Transforming my academic study on AI-generated selfies into a blog post wasn’t just about shortening paragraphs or simplifying vocabulary. It was a practice in transliteracy, which, as Sue Thomas et al. define it, is “the ability to read, write and interact across a range of platforms, tools and media.” In my original academic paper, I explored AI selfies through a structured approach, a literature review, and a detailed comparative analysis. The tone was formal, the language refined, and the focus was mainly on critical depth. When adapting it into a blog post suited to online reading, I had to make some key changes. First, I made the tone and text more natural and vivid, so that readers would not feel alienated by the blog or weighed down by an overly serious register. Second, I added images such as memes to help convey the feelings I wanted the blog to express. This shift exemplifies Marshall McLuhan’s famous idea that “the medium is the message”: the form in which we communicate shapes how the message is interpreted. By reframing my research in this way, I made it not only more approachable but also more alive and more shareable.
References:
Gunthert, A. (2015). The consecration of the selfie: A cultural history. Études photographiques, (32).
McLuhan, M. (2017). The medium is the message. In Communication theory (pp. 390-402). Routledge.
Raigoso Raigoso, N. (2023). Behind the screen: Biases and stereotypes in DALL-E AI-generated images.
Stevenson, M. (2018). From Hypertext to Hype and Back Again: Exploring the Roots of Social Media in the Early Web. https://hcommons.org/deposits/item/hc:16611/
Tifentale, A. (2018). The selfie: more and less than a self-portrait. Routledge Companion to Photography and Visual Culture, 44-58.
Thomas, S. (2013). Transliterate Spaces - Sue Thomas - 3Ts 2013: Transliteracy from Cradle to Career [Slides]. http://www.slideshare.net/suethomas/transliterate-spaces-sue-thomas-3ts-2013-transliteracy-from-cradle-to-career?qid=8250bddc-9e00-45fd-917e-526d05482ce2&v=default&b=&from_search=1