I asked an AI to generate this selfie, and I started by describing myself the way I'd describe my everyday life: a bilingual university student living in Edmonton in winter. That detail mattered more than I expected. Without it, the first images looked like a generic "pretty" selfie. Once I included bilingual cues, the output started to feel closer to my actual routine: switching between English and Chinese, and constantly moving between real life and platform life.
The final image matches what my commute feels like in winter. When I'm taking the LRT (Light Rail Transit) or waiting for the bus, I'm usually bundled up with headphones on, and I'm almost always on my phone: checking delays, searching for directions, or getting pulled into "Recommended for you" while I'm standing there. That's why I built the interface into the prompt: the Search / 搜索 bar, likes, notifications, and the recommendation panel floating beside me like it's part of the street.
The most uncomfortable (but also the most honest) part is the data layer: "Location ON / 定位已开启," "Data shared / 数据共享," "Ad targeting: student," and "language: EN / 中文." Seeing those labels next to my face makes it obvious how quickly a person turns into categories. The "visa status: newcomer" line hits especially hard. I'd never choose to put that on a selfie, but it captures how platforms infer, sort, and then use those guesses to target you.
Does it match how I see myself? Partly. The winter-student vibe feels real. But the overlay shows the version of me that's easiest to measure and monetize, and that is exactly what making this image helped me notice.
