Exploring Identity Through AI-Generated Selfies: Digital Selfie Creation - Katie Alexander

Using AI to create this selfie was a challenging experience. At first, when I would input all of my information, it would just give me generic women who looked nothing like me. As I became more detailed, it would change one thing to resemble my face but alter other things to be even farther from what I actually look like. This picture and I share the essence of the same person, but it is definitely not me in the image. It was interesting to see how the random facts I provided in the prompt affected the image, when, in reality, these details would have nothing to do with my appearance. I included the fact that I was a university student, which is where the background comes from, and that I worked at Lululemon, which gave me a more athletic look and put me in an Under Armour top.

I do not think that this image is far off from my own self-perception. It looks like it could be a relative of mine in another life. It captured that I am active and happy, but it could never capture the other undertones that physical description alone cannot express. The reading by Chubb et al. (2022) mentions the idea of dominant narratives in AI storytelling, and I feel this applies to AI selfies as well: in how people describe themselves, dominant characteristics take over the image, even if the result is not true to the prompt. Generic descriptions like smiling without showing teeth or having long, full lashes look different on everyone, yet they can only be interpreted in so many ways, which shapes the overall output.

Comments

  1. Hi, Katie! Thank you for sharing your AI-generated selfie and reflection. I really appreciated how honestly you described the frustration and uncertainty you experienced while trying to get the AI to represent you accurately. Your observation that the image captured the “essence” of you without actually being you was especially compelling, as it highlights the gap between self-perception and algorithmic interpretation.

    I found your discussion of prompts particularly interesting, especially the way non-visual information—such as being a university student or working at Lululemon—directly influenced the final image. This demonstrates how AI systems translate social and cultural markers into visual cues, even when those markers would not normally determine physical appearance. It raises important questions about how identity is simplified or stereotyped through AI-generated representation, rather than understood as complex or contextual.

    Your connection to the idea of dominant narratives in AI storytelling, as discussed by Chubb et al. (2022), adds strong critical depth to your reflection. The point you make about generic descriptors—such as smiling or having long lashes—being interpreted in limited ways really emphasizes how AI relies on generalized visual norms. This also connects well to broader discussions in the course about representation and whose characteristics become normalized through digital systems.

    One question I had while reading your post is whether this experience changed how you think about self-representation on social media more broadly. After seeing how AI reshapes identity through prompts and assumptions, do you feel more aware of how platforms may similarly frame or flatten personal identity? Overall, your post offers a thoughtful and nuanced reflection on the limits of AI self-representation and encourages deeper consideration of how identity is mediated by technology.

  2. Hi Katie — I really liked your post. The way you described the AI “fixing” one feature to match you while making other parts even less accurate felt super real. That’s such a specific kind of frustration, because it shows the AI isn’t actually seeing you — it’s trying to assemble a believable person based on patterns. Your line about it feeling like a “relative in another life” captured that perfectly: close enough to recognize the vibe, but still not you.

    I also thought your examples were really strong, especially the Lululemon → Under Armour shift. It’s kind of wild that non-visual details (where you work, being a student) end up shaping the whole aesthetic, even when they shouldn’t logically change your face. It made me think about how AI fills in gaps with stereotypes or “default” assumptions — like it grabs a few keywords and builds a whole character around them. In a way, it’s not just generating a selfie; it’s generating a narrative version of you that fits what the system expects.

    Something I’m curious about: did you ever try prompts that were negative or restrictive (like “no athletic brand/logo,” “no glamour look,” “no beauty filter,” “no model pose”)? I wonder if the AI still pushes people toward the same polished, “marketable” look even when you try to resist it. Also, after seeing how easily the AI turns random details into visual cues, did it change how you think about regular social media selfies — like how platforms and trends subtly encourage certain “types” of faces and vibes? Your reflection made me think way more about how identity gets simplified when it has to be translated into images.
