AI as a Mirror: Reflecting on our self-identity through AI

Selfies have long been a tool for shaping and curating our digital identities. But what happens when AI steps in? Can AI do more than mimic how we look? Can it interpret who we are? This blog follows my journey of creating an AI-generated selfie, exploring how technology can both reflect and distort our identities.

Crafting the image of “me”

Figure 1.
It all started with a simple idea. Could I create a selfie that felt like me? Not just in appearance, but in personality?
I documented my creative journey in the video below, which shows the steps and tools I explored while generating my AI selfie. What stood out most was how AI can struggle with nuance. Some results felt overly simplistic, others leaned into stereotypes. However, when I shifted my focus from physical accuracy to capturing my personality, the image improved. The final version (Figure 1) felt warm and open, qualities that truly resonate with me. It even included subtle physical traits that made it feel more authentic and personal.
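For anyone curious to try a similar experiment, here is a minimal sketch of the kind of prompt comparison I describe above. It assumes the OpenAI Python client and the DALL·E 3 model as one possible setup; the prompts are illustrative stand-ins, not the exact ones I used.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Two illustrative prompts, mirroring the shift described above:
# one fixated on physical accuracy, one focused on personality.
prompts = {
    "physical": "A selfie of a young woman with shoulder-length brown hair "
                "and brown eyes, against a neutral background",
    "personality": "A selfie of a warm, open, curious student who loves "
                   "cycling and photography, smiling at the camera",
}

for label, prompt in prompts.items():
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    print(label, result.data[0].url)  # open both URLs to compare side by side
```

In my case, it was the personality-focused variant that produced images that felt most like me.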

This process got me thinking about how we define identity online. Maybe it’s not just about how we look, but about how we express who we are and how we feel about ourselves.

Song: California from Run Devil Run. 

Does AI do a better job of representing you than you do?

The final AI-generated image (Figure 1) was surprisingly close to how I see myself. The girl in the picture had a similar haircut, wore an orange sweater, and carried a backpack. These are items I actually own but hadn’t mentioned in the prompt. That detail intrigued me. It felt like the AI went beyond direct descriptions and interpreted personality traits in a meaningful way.

Figure 2.

In contrast, my real selfie (Figure 2) looked quite different: more serious and reserved, while the AI version projected warmth and expression. The AI-generated image seemed to resonate more with how I perceive myself. Being able to provide details in the prompt allowed the AI to incorporate elements of my personality that are often difficult to convey in traditional selfies. This moment echoed the research by Moga and Rughiniş (2023), who suggest that AI-generated images can reveal aspects of identity that are tough to express through conventional photos.

AI-generated images offer a unique perspective on identity—not just how we look, but how we’re interpreted by others and by algorithms. If an AI version of me feels more “me” than my real selfies, what does that say about how we perform identity in the digital world? To explore the psychology behind AI-generated images and how they can change our understanding of self-representation, check out this article. 

AI's Identity Crisis: the problem of AI reducing identities to stereotypes

Figure 3.
While there were moments of surprising insight, my journey also highlighted some of AI's limitations. When I removed physical traits from my prompt and focused on personal values and my Dutch nationality, the AI defaulted to stereotypical visuals like Dutch flowers and architecture (Figures 3 and 4). It felt like the algorithm had reduced me to a postcard. This problem goes beyond aesthetics. As scholars like Miltner (2024), Mohamed et al. (2024), and Ho (2023) explain, AI systems often reinforce cultural and social stereotypes. These patterns emerge from biased datasets (Nicoletti & Bass, 2023): some groups are overrepresented, while others are simplified or ignored.

Figure 4.
Misrepresentation in AI has real, tangible consequences. It can make people feel invisible or reduced to stereotypes, reinforcing societal biases. Research has shown that certain cultures are often oversimplified or reduced to clichés. For example, Gautam and Ghosh (2024) found that AI tools frequently depict Indian culture with narrow, symbolic visuals that overlook its regional diversity. When these patterns go unchecked, they shape how individuals are perceived and how they perceive themselves, gradually normalizing harmful stereotypes. If AI continues to misrepresent marginalized groups, it could deepen digital exclusion. Addressing this issue is crucial, and Figure 5 outlines some essential steps to begin making change.


Figure 5.

Note. From SG Analytics [Infographic], 2022 (https://www.sganalytics.com/blog/bias-in-artificial-intelligence-is-diversity-the-key-to-the-future-of-ai/)
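
If you want to probe this tendency yourself, one simple approach is a prompt ablation: hold the wording constant and vary only the cultural descriptor, then compare what the model reaches for. Below is a minimal sketch under the same assumed setup as the earlier snippet (OpenAI Python client, DALL·E 3); the prompt variants are hypothetical.

```python
from openai import OpenAI

client = OpenAI()

# Hold everything constant except the nationality descriptor.
base = "A portrait of a person who values honesty, curiosity, and creativity"
variants = ["", ", who is Dutch", ", who is Indian", ", who is Brazilian"]

for suffix in variants:
    label = suffix.strip(", ") or "baseline"
    result = client.images.generate(
        model="dall-e-3",
        prompt=base + suffix,
        size="1024x1024",
    )
    # Recurring clichés across repeated runs (tulips, windmills, narrowly
    # symbolic dress) hint at dataset-level stereotypes rather than chance.
    print(label, result.data[0].url)
```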

From Course to Conversation: Translating Academic Jargon into Reader-Friendly Content

When I began turning my academic critique into a blog post, I quickly realized I had to rethink how I wanted to communicate. Academic writing is often dense and fact-focused. A blog, on the other hand, invites the reader into a conversation. It’s not just about sharing ideas, but about making them resonate with a broader audience.

This shift in format reminded me of Manovich’s (2001) ideas on new media, where content lives in a global, participatory space. Blogging meant stepping into that space. It also connected with the concept of transliteracy, which Thomas (2013) describes as the ability to navigate across platforms and engage diverse audiences. That meant adjusting not only the tone and language but also the structure and feel of the content.

To do this, I broke down complex theories into accessible ideas, cut out jargon, and added visuals that might improve understanding. But more importantly, I made it personal. Including my own experiences helped the post feel more relatable, opening a space for connection. This ties into the idea of the gift economy (Stevenson, 2018), where shared knowledge creates community rather than just delivering content. Embracing transliteracy helped me move from academic texts into a more open, conversational space where critical reflection becomes a shared experience.

References

Gautam, S., & Ghosh, S. (2024, October 7). Non-Western cultures misrepresented, harmed by generative AI, researchers say. Penn State College of Information Sciences and Technology. https://ist.psu.edu/about/news/venkit-aies-social-harms

Ho, S. C. Y. (2023). From development to dissemination: Social and ethical issues with text-to-image AI-generated art. Proceedings of the Canadian Conference on Artificial Intelligence. https://doi.org/10.21428/594757db.acad9d77

Manovich, L. (2001). What is new media? In The language of new media. MIT Press. https://dss-edit.com/plu/Manovich-Lev_The_Language_of_the_New_Media.pdf

Miltner, K. (2024). Lensa and the discourse of visual generative AI. University of Sheffield. https://doi.org/10.33621/jdsr.v6i440456

Moga, D. A., & Rughiniş, C. (2023). Idealized self-presentation through AI avatars: A case study of Lensa AI. In 2023 24th International Conference on Control Systems and Computer Science (CSCS) (pp. 426-430). IEEE. https://www.researchgate.net/profile/Cosima-Rughinis/publication/371006582_Idealized_Self-Presentation_through_AI_Avatars_A_Case_Study_of_Lensa_AI/links/646e830f37d6625c002e469f/Idealized-Self-Presentation-through-Al-Avatars-A-Case-Study-of-Lensa-Al.pdf

Mohamed, Y. A., Mohamed, A. H., Kannan, A., Bashir, M., Adiel, M. A., & Elsadig, M. A. (2024). Navigating the ethical terrain of AI-generated text tools: A review. IEEE Access, 12, 197061-197120.

Nicoletti, L., & Bass, D. (2023, June 9). Humans are biased. Generative AI is even worse. Bloomberg. https://www.bloomberg.com/graphics/2023-generative-ai-bias/

Stevenson, M. (2018). From hypertext to hype and back again: Exploring the roots of social media in the early web. https://hcommons.org/deposits/item/hc:16611/

Thomas, S. (2013, March 15). Transliterate spaces [PowerPoint slides]. eClass. https://eclass.srv.ualberta.ca/mod/resource/view.php?id=8288118

 




