Module 5: Misinformation & Disinformation - Who is Telling the Truth?


As we look around today, we have plenty of examples of both mis- and disinformation. Social media, Reddit threads, and online groups are taking it upon themselves to call out the misrepresentations that fill our feeds.

Making our way through the Module readings, I am also reading "Storytelling and/as Misinformation". (You can find the article via the U Alberta library: McDowell K, Sanfilippo MR, Ocepek MG. Storytelling and/as Misinformation: Storytelling Dynamics and Narrative Structures for Three Cases of COVID-19 Viral Misinformation. In: Governing Misinformation in Everyday Knowledge Commons. Cambridge Studies on Governing Knowledge Commons. Cambridge University Press; 2025:18-40.)

Understanding misinformation and disinformation requires attending to narrative structures and relationships. In the storytelling triangle, the audience's relationship to the teller hinges in part on how they understand the teller's own relationship to the story, as well as which story the teller chooses to tell that audience. This framework proves illuminating when analyzing information disorders in digital spaces.

When misinformation circulates, the teller often believes the story they share, maintaining what appears as an authentic relationship to the narrative. The audience may trust the teller based on personal connections or shared community membership, even when the story itself contains fabrications. With disinformation, however, the teller's relationship to the story is fundamentally dishonest. They select particular narratives strategically, tailoring stories to specific audiences to achieve predetermined effects. The teller knows the story is false yet presents it as truth, exploiting the audience's trust.

This triangular relationship becomes further complicated in algorithmic environments where the "teller" might be a bot, a coordinated network, or an AI system. The audience may not recognize that the apparent teller has no genuine relationship to the story whatsoever. Moreover, algorithms determine which stories reach which audiences based on predicted engagement rather than narrative integrity or epistemic value.

Lim et al. (2024) translate theoretical frameworks into empirical analysis by examining how social media architectures function as amplification mechanisms for both misinformation and disinformation. Their research demonstrates that platform affordances such as shareability, virality metrics, and algorithmic prioritization create conditions where false information can achieve greater velocity than verified content. Each act of sharing carries epistemological weight, amplifying messages regardless of their relationship to truth.

What proves particularly problematic is how platform design prioritizes engagement over accuracy. Content generating intense affective responses receives preferential algorithmic treatment, creating feedback loops where users encounter information reinforcing existing worldviews. This produces what scholars term filter bubbles or echo chambers, environments where repeated exposure to ideologically consistent content increases susceptibility to aligned misinformation.

When we share information on social media, we become tellers within the storytelling triangle, positioning ourselves in relationship to both the story and our audience. Our credibility as tellers depends partly on our demonstrated care in selecting which stories to amplify and our transparency about our own relationship to those narratives. Do we share information because we have verified it, or because it confirms our worldview? Do we acknowledge uncertainty, or present speculation as fact?

Something to think about as we progress through the readings: platform architectures privilege engagement rather than veracity. Users bear responsibility for critical evaluation prior to amplification. Consider your position as a teller within information networks and what your choices signal about your relationship to truth.

With that in mind, look at the CRA information. It exemplifies credible, verifiable, transparent information...or does it? Government statistical agencies, peer-reviewed scholarship, and established fact-checking institutions model information practices grounded in methodological transparency, source citation, and acknowledgment of epistemic limitations. These sources usually demonstrate a clear relationship between teller and story, one grounded in methodological rigour and institutional accountability.

So, to go back to Shin's question: "how do we know what we know?" In digital environments, answers emerge not through passive reception but through active, critical, ethically grounded information practices. The challenge we face today involves developing not only those technical skills but also a curiosity: a critical-thinking mindset that seeks out facts and accountability.
