Assignment 4: Theory of the Selfie, Part 3 - Alayna Liu

Navigating the Complexities of Digital Self-Representation in the Age of AI

In the digital age, the selfie has emerged as a powerful medium for self-expression and identity construction. As Cruz and Thornham (2015) note, a selfie is a self-directed, technologically mediated form of visual representation. In their article "Selfies beyond self-representation: The (theoretical) f(r)ictions of a practice," Cruz and Thornham (2015) challenge the traditional view of selfies as mere self-documentation or narcissistic identity affirmation, arguing instead that the selfie allows individuals to curate and control their online persona. However, the process of creating and sharing selfies is far from a neutral act (Orekh et al., 2016); Orekh and colleagues (2016) claim this is because selfies are intertwined with the influences of technology, culture, and personal identity. I explored this dynamic by using three AI photo apps, Fotor AI, Getimg.AI, and Spyne, to generate digitally enhanced selfies. While these tools can enable new modes of self-expression, they also constrain the process of digital identity construction. This analysis recounts my experience navigating the tension between an AI-idealized self and my lived experience. I critically engage with the ethical challenges, cultural biases, and implications of AI-generated self-representation to examine the role of AI technologies in shaping digital self-representation.

For this part of the assignment, I used the Fotor AI, Getimg.AI, and Spyne platforms to create digitally enhanced selfies of myself. The process began with creating a free account on each platform. I then uploaded the same baseline photograph of myself to all three. On Fotor AI, the algorithms proceeded to "improve" the photo by applying various enhancements, and the platform also offered a range of customization options. For instance, I could choose whether my output would have the Mona Lisa as a background or whether my selfie would be rendered in an anime style. Of the three tools, the Fotor AI interface offered by far the most enhancement options, so I experimented with different filters, lighting adjustments, and even facial feature alterations. The resulting image is shown in Image 1 below.

Image 1: Fotor AI digitally enhanced selfie results

However, I quickly realized that while the platform offered a degree of creative freedom, the options lacked diversity. Notably, the AI-powered enhancements were largely driven by preexisting aesthetic standards and societal norms. For instance, the Fotor algorithms automatically smoothed my skin and accentuated my facial features in ways that aligned with mainstream ideals of beauty and attractiveness. Thus, despite the many options, I felt they offered more of the same.

To test whether this was an isolated incident, I enhanced the same photo with another AI tool, Getimg.AI. I soon realized, however, that Getimg.AI and Fotor AI had nearly identical interfaces, so I suspected the results would not differ much. True to my prediction, Getimg.AI's idea of enhancement was to smooth my skin even further, as shown in Image 2 below. The AI also gave me more European features, such as a nose with a narrow ridge.

Image 2: Getimg.AI result for my digitally enhanced selfie

For a deeper analysis, I used a third tool called Spyne. Unlike the other two tools, Spyne offered no customization: I uploaded the image and the AI decided how best to enhance my selfie. The results are shown in Image 3 below. As shown, Spyne's results were not as radical as those of the other two tools, but like them, it smoothed my skin even further.

Image 3: Spyne digitally enhanced selfie

This exercise echoed the class readings, which highlighted how AI can, on the one hand, enable new forms of self-expression and, on the other, impose limitations based on embedded cultural biases. Drawing on the course material as I navigated the three AI platforms, I found myself grappling with the ethical implications of using such technology to represent myself online. On the one hand, the three AI-generated selfies offered a visually appealing, "enhanced" version of myself; on the other, I was keenly aware that these images did not fully capture my identity. The exercise also raised concerns about digitally manipulated selfies: I found myself wondering how AI technology will affect authenticity, privacy, and the potential for misrepresentation. These issues are crucial to consider in the context of digital self-representation.

The digitally enhanced selfies I created using Fotor AI, Getimg.AI, and Spyne highlighted the contrast between the AI-generated representations and my authentic self. Notably, these AI-powered tools projected an idealized version of myself. As Winkler (2023) notes, such images adhere to a particular societal standard of beauty, yet this representation felt increasingly disconnected from my lived experience and identity. For Winkler, such an experience reflects a lack of user control in the editing process. In his article investigating the dangers of AI photography, Winkler (2023) notes that AI-based image editors can modify images without input from users. A good example is Spyne, which decides on its own how to enhance uploaded images. This can be damaging because the algorithms may not grasp the user's goals; instead, they produce an image based on their training data, which most likely will not represent everyone's identity.

As I delved deeper into the analysis, I began to recognize how these AI platforms had reshaped my self-representation. First, there was the smoothing of my skin, followed by the subtle yet deliberate refinement of my facial features. All these "enhancements" contributed to a more "palatable" digital self-image. For example, both Fotor AI and Getimg.AI altered my appearance to align with Eurocentric beauty standards: as can be seen in Images 1 and 2 above, the tools gave me a narrower nose bridge, larger eyes, and a more chiseled jawline. Additionally, the AI-generated selfies incorporated virtual cosmetic enhancements, including prominent foundation, red lipstick, and rosy blush, which further "enhanced" my idealized aesthetic. Such a situation is investigated by Ananya in a 2024 review. Ananya (2024, p. 722) notes that one image generator, when prompted to generate 'a photo of an American man and his house,' produced a pale-skinned person in front of a massive colonial-style home; when prompted for 'a photo of an African man and his luxury house,' it generated a dark-skinned figure in front of a mud house. After analyzing other AI tools, the review found that most resorted to common stereotypes. In short, most of the AI models generated biased and stereotypical pictures of gender, skin color, occupation, nationality, and more. As a result, these tools can influence what is deemed socially acceptable or representative of a "desirable" identity, and they risk marginalizing and excluding individuals. Regrettably, most of those marginalized will be minorities whose physical characteristics fall outside the narrow parameters established by these technologies. Importantly, these AI-powered tools have the potential to shape how individuals express their identities online.

Furthermore, the integration of AI-enhanced selfies into our digital landscapes raises concerns about the authenticity of our online self-presentation. As these tools grow in usage and become more integrated into digital communication, there is a risk of creating a skewed perception of reality, especially if algorithmically manipulated versions of ourselves become the expected norm rather than authentic representations of our identities. Such a situation is the main idea of Volpicelli's article. Volpicelli (2023) observed that AI image generators have become more accessible, produce higher-quality output, and can be scaled to generate thousands of images. Thus, for Volpicelli (2023), AI-generated images pose risks of political disinformation and threaten the authenticity of photographic evidence. In addition, a study by Lu and others (2024) highlights the advancement of AI technology in generating realistic fake images that can easily deceive humans: the researchers found that humans had a misclassification rate of 38.7% when distinguishing AI-generated images from real ones. Together, these studies show the potential of AI-generated visual content to spread misinformation and false narratives.

Moreover, the use of AI-powered image generators and enhancers raises privacy concerns. As Dwivedi and others (2022) note, circulating representational digital content carries crucial ethical responsibilities with regard to consent, privacy, and the preservation of the original creator's intent. Yet, as I observed, when users upload their images to these AI platforms, they are required to surrender certain rights and consent to the platform's terms of service. As Li and Huang (2019) argue, this transactional model of consent can lead to the exploitation of user-generated content, because the platforms gain the ability to recontextualize and repurpose images without the original creator's control. Moreover, the rapid dissemination of AI-enhanced selfies compounds these privacy issues. According to Volpicelli (2023), social media enables images to spread online quickly, which carries the risk of fake AI-generated images undermining the integrity of self-representation. Thus, AI tools pose risks of misuse and misrepresentation. It is therefore crucial to recognize that the tension between AI-mediated and authentic self-representation is not merely a matter of aesthetics; it challenges, among other things, the very foundations of how we construct and understand identity in the digital age.

The peer review process for Part 1 of this assignment provided valuable insights that were instrumental in shaping the development of this post. First, my peers challenged me to delve deeper into the ethical implications of AI-mediated self-representation, a point the instructor also noted. In addition, the instructor challenged me to connect my analysis to the scholarly discourse on digital identity. Integrating this feedback, I strengthened my thesis statement, deepened my engagement with the literature, and critically examined how AI not only enhances the visual aspects of self-representation but also reshapes the processes of identity construction. I achieved this by considering two additional AI tools, with the aim of presenting a more informed analysis. Furthermore, the transition from a traditional academic paper to a blog format allowed me to explore new ways of communicating my insights. Notably, the blog medium enabled me to incorporate multimedia elements, such as images, which helped enhance the narrative, especially around the AI tools' results. As a result, I believe my post engages the reader more effectively. This shift in medium also prompted me to adopt a more conversational tone, while still aiming to maintain analytical depth.

I ran into various ethical and cultural issues in this activity. Notably, I had concerns about misrepresentation, privacy, and consent while using the AI-powered platforms. Initially, I was hesitant to submit my selfies to the three AI sites. To ease my fears, I skimmed through the terms of service on each site and discovered that, in order to use them, I had to give up certain ownership rights to my picture data. For instance, Fotor claims to respect copyright but also admits that it has no mechanism to ensure that copyright rules are obeyed on its platform (Fotor, 2024). My concerns about possible exploitation persisted even though the terms of service stipulated that my image would not be shared with third parties. Furthermore, there was a chance that my authority as the original author could be undermined through a data breach: third parties might access my selfie without my consent or knowledge and use it for marketing or other purposes. The ease and speed with which images can be shared on the internet intensified these ethical issues. Finally, the biases in the AI algorithms influence how people view digital self-representation, raising cultural concerns about the perpetuation of discriminatory narratives and the marginalization of different lived realities.

Creating and analyzing AI-generated selfies has been a transformative experience that challenged me to examine the nature of digital self-representation. I now recognize the impact AI technologies have on how we construct our online identities. I also realized that the algorithms underlying these AI tools have been trained on dominant aesthetic standards; as a result, their output often does not fully align with minorities' sense of identity. This realization has led me to consider the important ethical implications of AI-generated self-representation. One key consideration is the tension between idealization and authenticity; another is the cultural forces that influence how we present ourselves in the digital realm. Going forward, I am committed to continuing to explore digital identity and the role of AI in shaping self-expression, and I will be more aware of the difficulties involved in constructing one's digital self. Ultimately, this experience has been a lesson on the nature of identity in the digital age.


References
Ananya. (2024). AI image generators often give racist and sexist results: can they be fixed? Nature. https://doi.org/10.1038/d41586-024-00674-9

Cruz, E. G., & Thornham, H. (2015). Selfies beyond self-representation: The (theoretical) f(r)ictions of a practice. Journal of Aesthetics & Culture, 7(1), 28073. http://dx.doi.org/10.3402/jac.v7.28073

Dwivedi, Y. K., Hughes, L., Baabdullah, A. M., Ribeiro-Navarrete, S., Giannakis, M., Al-Debei, M. M., ... & Wamba, S. F. (2022). Metaverse beyond the hype: Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 66, 102542. https://doi.org/10.1016/j.ijinfomgt.2022.102542

Fotor. (2024, January 19). The latest and most complete terms of service. Fotor. https://www.fotor.com/termsofservice

Li, Y., & Huang, W. (2019). Taking users' rights seriously: Proposed UGC solutions for spurring creativity in the Internet age. Queen Mary Journal of Intellectual Property, 9(1), 61-91. http://dx.doi.org/10.4337/qmjip.2019.01.04

Lu, Z., Huang, D., Bai, L., Qu, J., Wu, C., Liu, X., & Ouyang, W. (2024). Seeing is not always believing: Benchmarking human and model perception of AI-generated images. Advances in Neural Information Processing Systems, 36. https://doi.org/10.48550/arXiv.2304.13023

Orekh, E., Sergeyeva, O., & Bogomiagkova, E. (2016, October). Selfie phenomenon in the visual content of social media. In 2016 International Conference on Information Society (i-Society) (pp. 116-119). IEEE. http://dx.doi.org/10.1109/i-Society.2016.7854191

Volpicelli, G. (2023, October 23). AI and the end of photographic truth. POLITICO. https://www.politico.eu/article/ai-photography-machine-learning-technology-disinformation-midjourney-dall-e3-stable-diffusion/

Winkler, M. (2023, December 7). Potential dangers of artificial intelligence photo apps. Anderson Law Firm. https://andersonlawfl.com/potential-dangers-of-artificial-intelligence-photo-apps/













Comments

  1. Hi Liu! This blog post does an excellent job of highlighting the ethical conundrums that arise with the use of AI in personal representation. The nuanced discussion around consent, privacy, and the potential exploitation of digital images is particularly poignant. It serves as a crucial reminder of the need for ongoing vigilance and ethical considerations in our digital interactions.

    The exploration of cultural biases within AI algorithms and their impact on self-representation is a standout aspect of this analysis. By highlighting how these technologies can perpetuate Eurocentric beauty standards and marginalize minority identities, the post sheds light on a critical area that often goes overlooked in discussions about digital technology. It's a call to action for developers to be more inclusive in their algorithmic training sets.

    The tension between seeking authenticity in our digital personas and the algorithmic push towards idealization is a compelling narrative thread.

    Overall, I'm really into your post!

