Peris Jones - Creating Myself Through AI: When a Selfie Becomes a Story

I started this assignment thinking I would simply generate a selfie that looked like me. Instead, I ended up learning how much of my identity, especially online, is constructed, negotiated, and sometimes even decided for me. 

Generating the “First Version” of Me

I used DALL·E 3 to create my digital self-representation, starting with a prompt focused on my physical features: my hair, eyes, smile, and general appearance. I described a 23-year-old woman with shoulder-length dirty blonde wavy hair, green eyes, rosy cheeks, and minimal makeup. The result looked polished and conventionally attractive — but it didn’t fully feel like me. 

The image was too perfect. The lighting was soft, the skin was flawless, and the expression felt almost staged. It made me realize that even when I tried to be realistic, DALL·E 3 defaulted to a highly idealized version of a woman. At this stage, the selfie felt more like a generic template than a personal representation. 

Adding Identity Beyond Appearance

In my second and third prompts, I shifted my focus from appearance alone to identity. I added details about my environment (a cafe), my interests (reading), and aspects of who I am (a student, writer, and journalist). This is where the image started to feel more “recognizable.” The addition of books wasn’t just aesthetic; it communicated something about how I see myself. As I have learned throughout this course, selfies are a way of communicating identity, and these details became a way of signalling my values and personality. 

However, something interesting happened: even when I added complex identity traits like “compassionate,” “generous,” or “journalist,” DALL·E 3 translated these into visual cues, such as warm lighting and soft expressions. It showed me that the model doesn’t understand abstract identity; it reduces it to recognizable visual patterns. 

Negotiating Control and Authenticity

As I refined my prompts, I tried to make the image feel more natural and less staged. I asked for a “more neutral and natural expression” and later “more candid, less polished.” This is where I ran into one of the biggest challenges: control. 

Even when I explicitly asked for a less polished image, the results still leaned toward perfection. The skin remained smooth, the lighting stayed cinematic, and the overall aesthetic still felt curated. DALL·E 3 resisted imperfection. This revealed something important: digital tools don’t just reflect identity — they shape it. The version of me that the AI produced was constrained by what it “understands” a desirable or realistic image to be. 

Identity as Performance 

Looking at my final images, I realized that my digital self is not identical to my real-world self. It is more consistent, more aesthetically pleasing, and more intentional. In real life, identity is fluid and sometimes contradictory. But in these images, everything aligns neatly: I am a student, sitting in a cafe, surrounded by books, presented in warm lighting, looking thoughtful and approachable. 

This is where the idea of identity as performance becomes clear. Even though I was trying to be authentic, I was still making choices based on how I wanted to be perceived. The selfie becomes less about capturing reality and more about constructing a version of myself for an audience. 

Bias in Generative AI

Another key realization was how strongly DALL·E 3 leaned into specific beauty standards. Across all versions, the images consistently featured smooth skin, symmetrical features, and a soft, feminine aesthetic. Even when I tried to make the image more candid or natural, these elements remained. 

Additionally, when I first generated an image, I did not specify my skin tone. DALL·E 3 assumed that I was white, potentially based on other features that I listed. This reflects broader concerns about AI bias: the model is trained on datasets that prioritize certain types of beauty and identity, often from a Western perspective. As a result, the “self” that is generated is not entirely self-defined. It is shaped by existing cultural norms embedded within the technology. 

Translating This Into a Blog 

Writing this as a blog instead of a formal essay changed how I approached my analysis. Instead of using dense academic language, I focused on telling the story of my process: what I tried, what worked, and what surprised me. 

This is an example of transliteracy: adapting ideas across different forms of communication. In this format, my analysis becomes more personal and accessible, allowing my voice to come through more clearly. It also reflects Marshall McLuhan’s idea that “the medium is the message.” The blog format encourages a conversational tone, making the analysis feel more engaging and relatable than a traditional academic paper. 

Final Reflection

What started as a simple attempt to generate a selfie turned into a deeper exploration of identity, technology, and representation. My AI-generated selfie doesn’t fully capture who I am — but it reveals something just as important: how identity is shaped not only by us, but by the tools we use to represent ourselves. Maybe the real selfie is not just who we are, but how we are seen. 