As we look around today, we have plenty of examples of both mis- and disinformation. Social media users, Reddit threads, and online groups are taking it upon themselves to call out the misrepresentations replete across our feeds.
Making our way through the Module readings, I am also reading "Storytelling and/as Misinformation". (You can find the article via the U Alberta library: McDowell K, Sanfilippo MR, Ocepek MG. Storytelling and/as Misinformation: Storytelling Dynamics and Narrative Structures for Three Cases of COVID-19 Viral Misinformation. In: Governing Misinformation in Everyday Knowledge Commons. Cambridge Studies on Governing Knowledge Commons. Cambridge University Press; 2025:18-40.)
Understanding misinformation and disinformation requires attending to narrative structures and relationships. In the storytelling triangle, the audience's relationship to the teller hinges in part on how they understand the teller's own relationship to the story as well as which story the teller chooses to tell that audience. This framework proves illuminating when analyzing information disorders in digital spaces.
When misinformation circulates, the teller often believes the story they share, maintaining what appears as an authentic relationship to the narrative. The audience may trust the teller based on personal connections or shared community membership, even when the story itself contains fabrications. With disinformation, however, the teller's relationship to the story is fundamentally dishonest. They select particular narratives strategically, tailoring stories to specific audiences to achieve predetermined effects. The teller knows the story is false yet presents it as truth, exploiting the audience's trust.
This triangular relationship becomes further complicated in algorithmic environments where the "teller" might be a bot, a coordinated network, or an AI system. The audience may not recognize that the apparent teller has no genuine relationship to the story whatsoever. Moreover, algorithms determine which stories reach which audiences based on predicted engagement rather than narrative integrity or epistemic value.
Lim et al. (2024) translate theoretical frameworks into empirical analysis by examining how social media architectures function as amplification mechanisms for both misinformation and disinformation. Their research demonstrates that platform affordances such as shareability, virality metrics, and algorithmic prioritization create conditions where false information can achieve greater velocity than verified content. Each act of sharing carries epistemological weight, amplifying messages regardless of their relationship to truth.
What proves particularly problematic is how platform design prioritizes engagement over accuracy. Content generating intense affective responses receives preferential algorithmic treatment, creating feedback loops where users encounter information reinforcing existing worldviews. This produces what scholars term filter bubbles or echo chambers, environments where repeated exposure to ideologically consistent content increases susceptibility to aligned misinformation.
When we share information on social media, we become tellers within the storytelling triangle, positioning ourselves in relationship to both the story and our audience. Our credibility as tellers depends partly on our demonstrated care in selecting which stories to amplify and our transparency about our own relationship to those narratives. Do we share information because we have verified it, or because it confirms our worldview? Do we acknowledge uncertainty, or present speculation as fact?
Something to think about as we progress through the readings: platform architectures privilege engagement over veracity, so users bear responsibility for critical evaluation prior to amplification. Consider your position as a teller within information networks and what your choices signal about your relationship to truth.
With that in mind, look at the CRA information. It exemplifies credible, verifiable, transparent information...or does it? Government statistical agencies, peer-reviewed scholarship, and established fact-checking institutions model information practices grounded in methodological transparency, source citation, and acknowledgment of epistemic limitations. These sources usually demonstrate a clear relationship between teller and story, one built on methodological rigour and institutional accountability.
So, to go back to Shin's question: "how do we know what we know?" In digital environments, answers emerge not through passive reception but through active, critical, ethically grounded information practices. The challenge we face today involves developing not only those technical skills but also a curiosity: a critical-thinking mindset that seeks out facts and accountability.

What stands out most is that credibility isn’t just about the content itself, but about the relationship between the teller, the story, and the audience. That framework helps explain why misinformation can feel so convincing—because trust is often built socially, not factually.
I also think algorithmic environments add an important layer. Platforms like Reddit or Instagram complicate the triangle by obscuring who, or what, the “teller” actually is. When content is boosted based on engagement rather than accuracy, it shifts authority away from expertise and toward visibility. That makes it much harder for audiences to assess the authenticity of a story, especially when bots or coordinated networks are involved. The mention of credible sources like government agencies (such as the CRA) is interesting too, because it reminds us that even institutions we generally trust still require critical engagement. Transparency and accountability help establish credibility, but they don’t eliminate the need for scrutiny.
Before reading course materials on this topic, I had no idea there was actually a distinction between misinformation and disinformation. I used these terms interchangeably, assuming they both just meant 'false information.' It turns out the difference is significant: misinformation spreads without intent to deceive, while disinformation is deliberately crafted to mislead. As someone working in the humanities, I find this distinction has made me rethink how I approach sources in my own research.

I’ve started using AI in my daily work, much like many other researchers. I usually use ChatGPT to summarize long articles, put together bibliographies, or track down sources. It’s definitely fast, and sometimes it really surprises me. However, there is a real catch. AI doesn't actually look up information. Instead, it builds sentences that sound confident even when the information is totally made up. These "hallucinations" aren't just random glitches. They are built into how the models work because the system prioritizes patterns over actual facts. The part that worries me most is that AI mistakes don't look like typical human errors. If a human messes up a citation, they might get a page number wrong. But when ChatGPT fakes a source, it invents the whole thing. It can create a title, an author name, and a journal that sound perfectly real but have never existed. For those of us in the humanities who live and die by our primary sources, this is a huge problem. It’s easy to accidentally build a whole argument on a foundation of "ghost" sources.

To handle this, I’ve stuck to one rule: AI is a starting point, never the final word. If ChatGPT gives me a reference, I don't cite it until I've seen it with my own eyes. If it gives me a summary, I go back and read the original text. This takes more time, but I have to do it to keep my own work honest. It’s a bit ironic that we use these tools to speed things up, only to find we have to slow down to double-check everything they say. But that extra step is what makes research reliable. AI is a decent assistant, but it lacks the healthy skepticism and judgment that a real scholar needs. As these tools become more common, our job as fact-checkers is actually more important than ever. For those navigating AI tools in academic research: https://lib.guides.umd.edu/c.php?g=1340355&p=9880575
One of the most striking ideas across this week’s readings is how misinformation is not simply a technological problem, but a deeply human one. Shin’s discussion of the “misinformation paradox” highlights a troubling contradiction: even when people recognize misinformation as harmful, they still engage with and share it online. This challenges the assumption that awareness alone can solve the issue and instead suggests that transliteracy must involve not just understanding media, but critically reflecting on our own behaviours within it. Lim et al. further complicate this by distinguishing between misinformation (unintentional) and disinformation (intentional), emphasizing how both are amplified by social media structures that prioritize speed and engagement over accuracy. In a transliteracy context, this means users must navigate not only different media formats, but also different intentions behind information. The ability to interpret across media, then, becomes inseparable from evaluating credibility, motive, and emotional appeal.
What stood out to me most is how widespread concern about misinformation does not necessarily translate into effective action. For example, Statistics Canada reports that while 59% of Canadians are highly concerned about misinformation, many still struggle to distinguish truth from falsehood online. This gap suggests that transliteracy education needs to move beyond awareness toward practical, habitual skills, like verification and critical questioning. Overall, these readings raise an important question: if misinformation thrives not just because of technology but because of human cognition and behaviour, how can transliteracy education be designed to meaningfully change how we engage with information, rather than just how we understand it?