When AI Interpreted My Selfie Back to Me

 By Carolin Fu

[Image: AI-generated painted portrait with horizontal glitch-like distortions across the face]
AI-generated portrait selected for analysis

When I first used DALL-E for this assignment, I thought it would be a simple visual exercise. I expected to generate an interesting image and move on. That did not happen. While writing the prompt, I deliberately avoided gender markers and detailed physical descriptions because I wanted the image to express an emotional state rather than a realistic likeness. Even with that choice, the system produced a portrait with a face that read as male. At that point, the tool stopped feeling neutral. It felt like it was interpreting my ambiguity and turning it into a socially recognizable identity. 


That shift became the real subject of this project. What I ended up analyzing was not just one AI-generated portrait, but the way text-to-image generation turns selfhood into a negotiation between what I intend, what the platform allows, and what the model fills in for me. In that sense, the selfie is less a mirror of who I am and more a site where identity is assembled through human intention, platform affordances, and algorithmic patterning. 


How I Made the Image

I approached the image through iterative prompting rather than trying to make it look like a realistic version of me. I kept the same general genre—a painterly portrait—but changed how much identity information I provided each time. My first attempt focused on emotional ambiguity and introspection, with no gender markers or detailed facial description. When the image came back with a face that read as male, that result became the central tension of the project.


I then revised the prompt several times to probe ambiguity more directly, testing whether the system could sustain a less fixed identity when I asked for androgyny, cultural openness, or a refusal of traditional gender norms. In the end, I selected the version that made the system's interpretive habits most visible.


Prompting Process

Attempt 1: abstract mood portrait focused on emotional ambiguity and introspection
Attempt 2: explicitly androgynous features with the same painterly style
Attempt 3: a more culturally open portrait without a specific cultural look
Attempt 4: a figure that did not follow traditional gender norms
Attempt 5: the version selected for analysis because it made the system’s interpretive habits most visible


What the Image Showed Me

In an AI selfie, meaning starts forming before the image even exists. The prompt is not just a command. It is my way of telling the system what kind of image I want and what kind of feeling it should carry. By focusing on an emotional state instead of realistic likeness, I was trying to move the selfie away from a face-as-proof genre and toward something more expressive. 


What the system gave back suggests that portraiture comes with built-in expectations. Even when I avoided identity labels, the output still drifted toward familiar gender cues. That changed how I understood authorship. The image no longer felt like pure self-expression. It felt more like evidence of how I was being interpreted. In that sense, the selfie is not simply a mirror of who I am. It is a place where a version of self gets assembled through my prompt and the model’s learned habits. 


Why Context Matters

In this project, context matters almost as much as the image itself. My AI selfie first existed as a class artifact submitted on eClass, where the audience was limited and the purpose was mainly academic. In that setting, the image worked as evidence for an argument about how AI systems fill in identity. Once I place the same image in a blog, though, the context changes. It becomes easier to share, easier to link, and easier to encounter without the background of the assignment. That shift matters because it changes how the image can be read and how easily it can be misunderstood.


Publishing also changes the social life of the image. A blog is not just a neutral container. Once the portrait appears there, it sits within a more public digital environment shaped by visibility, circulation, and audience interpretation. That means the image is no longer only a classroom example. It becomes part of a wider visual culture in which identity cues can carry different meanings and different risks. 


Transliteracy Reflection

When I turned this project from a critical analysis into a blog post, I had to change not only the format but also the way the argument moved. In the paper, I developed the analysis through formal sections such as methodology, discourse, location, and ethics. In the blog version, I kept the same core argument, but I made it more readable for an online audience by breaking it into shorter sections, using a more direct first-person voice, and placing the image near the beginning so that readers encounter the artifact alongside the analysis. That shift made me more aware that meaning is shaped not only by content, but also by presentation. 

This change also affected how I understood the image itself. In the paper, the portrait functioned mainly as evidence within an academic argument. In the blog, it becomes more immediate and more public-facing. Readers see it before they fully enter the analysis, which changes how the argument is received. This is also where I see a connection to McLuhan’s idea that the medium is the message. Moving the project into a blog did not simply relocate the same analysis into a different container. The blog format changed how the image is encountered, what stands out first, and how identity becomes legible online.

Why the Blog Format Changes the Message

The blog format changes the message because it changes both structure and circulation. In the paper, the image was framed by academic conventions and read within a formal critical argument. In the blog, the portrait appears much earlier and in a more visually immediate way. That means readers encounter it not only as evidence, but also as a public-facing image that can shape their first impression before they move through the rest of the analysis.


The blog format also changes the message because it changes context. In my paper, I argued that once the same selfie is published on a blog, it becomes more searchable, more linkable, and easier to encounter without the original assignment background. That makes the image easier to share, but also easier to misunderstand. In that sense, the blog does not simply hold the analysis. It reshapes how the analysis is read by making visibility, circulation, and possible misreading part of the message itself.

Why This Image Raises Ethical Questions

The ethical issue raised by this image is not simply that the portrait feels inaccurate. The bigger issue is what the system does when I try to leave identity open. Instead of staying neutral, it repeatedly produced a face that leaned male. In my case, leaving out gender did not remove gender from the image; it invited the system to supply it. That is what made the portrait worth analyzing: it makes visible how AI can turn ambiguity into something more fixed and socially readable.


There is also an ethical question in how I present the image once it has already been pushed toward a gendered reading. I do not think the responsible move is to treat it as a transparent statement of my identity. A better approach is to present it as evidence of how the system handles ambiguity. Framing it that way matters because it keeps the focus on representation, interpretation, and platform logic, rather than asking viewers to treat the portrait as a simple reflection of who I am.



