This case highlights the distinction between misinformation and disinformation. If the influencer unknowingly shared incorrect usage, it constitutes misinformation. However, if the content was intentionally misleading to increase product sales, it aligns more closely with disinformation. As Lim et al. (2024) define, fake news consists of “news articles that are intentionally and verifiably false and could mislead readers” (p. 659), indicating that intent plays a key role in distinguishing these categories.
The rapid spread of such content is amplified by platform algorithms that prioritize engagement over accuracy. Social media environments encourage the viral circulation of misleading content, where “falsehoods spread faster than the truth” (Lim et al., 2024, p. 660). Moreover, misinformation functions in ways similar to contagion, as it “is analogous to a virus that can infect people and spread within networks” (Shin, 2024, p. 5), highlighting how easily such content can circulate across digital platforms. TikTok’s recommendation system promotes visually appealing and easily consumable content, creating echo chambers where misleading practices can quickly normalize.
Psychological factors also play a crucial role. Audiences often perceive influencers as authentic and trustworthy, which reduces critical evaluation. Social proof, such as likes and comments, reinforces credibility and encourages users to replicate and share content without verification.
From a broader perspective, this example can also be understood through the lens of Michel Foucault (1978). Influencers do not merely promote products; they shape how individuals manage and discipline their bodies. In this sense, digital media becomes a space where power operates through everyday practices, guiding behavior in ways that may prioritize commercial gain over well-being.
To combat such issues, users must adopt fact-checking strategies, such as verifying product instructions, consulting reliable sources, and questioning sponsored content. Ultimately, improving digital literacy is essential to resisting the spread of misinformation and disinformation in online environments.
Finally, the accompanying image, generated with the assistance of ChatGPT, visually reinforces the main argument of this reflection by showing how misinformation and, in some cases, disinformation on social media can be packaged as attractive, trustworthy, and profitable content. It also highlights the role of digital platforms in amplifying misleading messages, underscoring the growing need for fact-checking and critical thinking.
Foucault, M. (1978). The history of sexuality, volume 1: An introduction. Pantheon Books.
Lim, X.-J., Quach, S., Thaichon, P., Cheah, J.-H., & Ting, H. (2024). Fact or fake: Information, misinformation and disinformation via social media. Journal of Strategic Marketing, 32(5), 659–664.
Shin, D. (2024). Artificial misinformation: Exploring human-algorithm interaction online. Springer.
Hi Iffet! I really liked your post because it explained the difference between misinformation and disinformation in a very clear way, and the TikTok example made the topic feel really relevant to how people actually use social media every day. I also thought it was smart to focus on beauty and self-care content, because misleading information online does not always come in the form of political news or obvious fake stories. A lot of the time it appears in content that looks helpful, aesthetic, and trustworthy, which is probably why people are less likely to question it.
I also liked your point about algorithms and social proof. On TikTok especially, people often trust content because it looks convincing and comes from someone who seems relatable, not necessarily because the person has any real expertise. I think your post showed that dynamic really well, especially in how engagement can make misleading content seem more credible than it actually is.
I found the connection to Foucault really interesting too, because it added another layer to the post. It made me think about how influencers are not just selling products, but also shaping ideas about how people should treat and manage their bodies online.
One thing that might make the post even stronger is saying a little more about the fact-checking side. For example, what should users actually do when they see this kind of content? Checking product instructions, looking at expert advice, or comparing a few reliable sources might be useful examples to include. Overall, I thought this was a really thoughtful post, and your example worked really well for this topic.
Thank you so much, Jiayi, I really appreciate your thoughtful feedback. I’m glad the TikTok example and the distinction between misinformation and disinformation were clear and relatable.
You’re right about fact-checking, that’s a really useful point to build on. I think simple steps like checking official product instructions, looking at a dermatologist or expert advice, and comparing information across reliable sources are really important in these cases.
Thanks again for your insights!
Hi Iffet! I really liked how you used a specific TikTok example in order to distinguish misinformation and disinformation, especially in relation to intent. The point you made that stood out to me most was about how it can be difficult to determine whether an influencer is knowingly misleading their audience because, a lot of the time, it's not all that clear. This connects well to Rubin's discussion of how misinformation exists on a spectrum, where content can shift between accidental and intentional depending on context and interpretation. It makes me wonder whether focusing too heavily on intent might actually limit our ability to respond effectively to harmful content, considering the impact of the content is often more important than the intent. I also found your connections to algorithms and virality really compelling, specifically the idea that platforms prioritize engagement over accuracy. Your example reflects how misinformation is not just about false content, but also about how knowledge itself is constructed and circulated in digital environments. TikTok's format (which is short, visually appealing, and easily replicable) shows how users rely more on social cues (likes, shares, influencer credibility) than on verification. Your use of Foucault also adds a strong layer, but I would be curious to hear more about how users might resist this form of power. For example, do you think digital literacy alone is enough, or should platforms take more responsibility in disrupting these cycles of misinformation?