I started with Open Art, which uses DALL-E, allowing generated
images to come from specific, plain-language prompts. DALL-E's evolution means
it can interpret context and nuance, allowing for an accurate representation.
It felt like if you input the right prompts, you should get
a representation of yourself. That was not the case in my experience, which I
found fascinating.
My initial prompt was: 50-year-old professional woman, medium
blonde hair, brown eyes, dimple on chin, average build, straight teeth, wide
smile. Light skin tone, busy mom of two teenagers, married. Passion for
running, working out, hiking, being outdoors brings joy.
I noticed my prompts described a third person, using "she"
rather than "me" or "I".
When the image was generated, I immediately knew it didn't
look like me, but, intrigued, I could imagine someone else seeing a resemblance. The
scene created around the image also felt authentic to my personality and captured
an accurate "energy," matching how I would have felt in similar past lived experiences.
I then tried ChatGPT Images and refined the prompts, detailing
the location of my dimple and changes in skin tone and skin condition (fewer
wrinkles), and the result felt more accurate. I started to reflect further on
how I could adjust prompts to get closer to my likeness, and I wasn't
clear about what I'd input.
The readings talk about traditional media and its evolution.
You can appreciate the endless data points. It feels impossible to prompt my way to
an image of myself.