By Carolin Fu
AI-generated portrait selected for analysis
When I first used DALL-E for this assignment, I thought it would be a simple visual exercise. I expected to generate an interesting image and move on. That did not happen. While writing the prompt, I deliberately avoided gender markers and detailed physical descriptions because I wanted the image to express an emotional state rather than a realistic likeness. Even with that choice, the system produced a portrait with a face that read as male. At that point, the tool stopped feeling neutral. It felt like it was interpreting my ambiguity and turning it into a socially recognizable identity.
That shift became the real subject of this project. What I ended up analyzing was not just one AI-generated portrait, but the way text-to-image generation turns selfhood into a negotiation between what I intend, what the platform allows, and what the model fills in for me. In that sense, the selfie is less a mirror of who I am and more a site where identity is assembled through human intention, platform affordances, and algorithmic patterning.
How I Made the Image
I approached the image through iterative prompting rather than trying to make it look like a realistic version of me. I kept the same general genre—a painterly portrait—but changed how much identity information I provided each time. My first attempt focused on emotional ambiguity and introspection, with no gender markers or detailed facial description. When that image came back with a face that read as male, that result became the central tension of the project.
I then revised the prompt several times to test ambiguity more directly, including whether the system could maintain a less fixed identity when I asked for androgyny, cultural openness, or a refusal of traditional gender norms. In the end, I selected the version that made the system’s interpretive habits most visible.
Prompting Process
Attempt 1: abstract mood portrait focused on emotional ambiguity and introspection
Attempt 2: explicitly androgynous features with the same painterly style
Attempt 3: a more culturally open portrait without a specific cultural look
Attempt 4: a figure that did not follow traditional gender norms
Attempt 5: the version selected for analysis because it made the system’s interpretive habits most visible
What the Image Showed Me
In an AI selfie, meaning starts forming before the image even exists. The prompt is not just a command. It is my way of telling the system what kind of image I want and what kind of feeling it should carry. By focusing on an emotional state instead of realistic likeness, I was trying to move the selfie away from a face-as-proof genre and toward something more expressive.
What the system gave back suggests that portraiture comes with built-in expectations. Even when I avoided identity labels, the output still drifted toward familiar gender cues. That changed how I understood authorship. The image no longer felt like pure self-expression. It felt more like evidence of how I was being interpreted. In that sense, the selfie is not simply a mirror of who I am. It is a place where a version of self gets assembled through my prompt and the model’s learned habits.
Why Context Matters
In this project, context matters almost as much as the image itself. My AI selfie first existed as a class artifact submitted on eClass, where the audience was limited and the purpose was mainly academic. In that setting, the image worked as evidence for an argument about how AI systems fill in identity. Once I placed the same image on a blog, though, the context changed. It became easier to share, easier to link, and easier to encounter without the background of the assignment. That shift matters because it changes how the image can be read and how easily it can be misunderstood.
Publishing also changes the social life of the image. A blog is not just a neutral container. Once the portrait appears there, it sits within a more public digital environment shaped by visibility, circulation, and audience interpretation. That means the image is no longer only a classroom example. It becomes part of a wider visual culture in which identity cues can carry different meanings and different risks.
Why This Image Raises Ethical Questions
The ethical issue raised by this image is not just that it does not feel fully accurate. The bigger issue is what the system does when I try to leave identity open. Instead of staying neutral, it repeatedly produced a face that leaned male. In my case, leaving out gender did not remove gender from the image. It invited the system to supply it. That is what made the portrait useful to analyze. It made visible the way AI can turn ambiguity into something more fixed and socially readable.
There is also an ethical question in how I present the image once it has already been pushed toward a gendered reading. I do not think the responsible move is to treat it as a transparent statement of my identity. A better approach is to present it as evidence of how the system handles ambiguity. Framing it that way matters because it keeps the focus on representation, interpretation, and platform logic, rather than asking viewers to treat the portrait as a simple reflection of who I am.