by Nino Bregadze, Ilia State University, Georgia

This article is the result of a joint initiative bringing together the Media & Learning Association and the Georgian National Communications Commission (ComCom) to promote and provide a platform for young researchers in digital and media literacy. This paper was written by the second-place winner of the student analytical paper competition “Become a Media Literacy Researcher” organised by ComCom. This article represents a condensed version of the full-length academic paper. The complete research, including the full bibliography, is available here.
The contemporary media environment operates at a relentless pace. Information is produced, circulated and consumed in real time, leaving little room for reflection. For many users, daily life now involves constant exposure to dense streams of content, often layered on top of mental fatigue and emotional burnout. The scale of this information overload is staggering. According to the DOMO 2024 report, every single minute sees more than 16,000 videos uploaded to TikTok, while nearly 139 million reels are watched across Instagram and Facebook combined.
In such an environment, social media users are not merely informed – they are overwhelmed. News, opinions and interpretations arrive continuously, demanding attention while simultaneously eroding the cognitive resources required for critical evaluation. Processing information through an analytical lens becomes increasingly difficult, and this creates “ideal” conditions for the spread of disinformation. False information spreads six times faster on social media than verified facts, underscoring how speed and emotion often outperform accuracy in digital spaces.
Although misinformation has existed throughout human history, the sheer volume and intensity of today’s content ecosystem present unprecedented challenges. Scroll-based platforms are powered by sophisticated algorithms designed to maximise engagement, not understanding. These systems selectively filter information deemed most appealing or believable to individual users, gradually enclosing them within personalised information bubbles. As a result, users are less inclined to seek objective truth and more likely to accept narratives that align with their existing beliefs, political preferences or social identities. Truth becomes fragmented, personalised and – at times – entirely reconstructed.
Within this context, this article examines two central questions: what linguistic and discursive strategies are employed to spread disinformation in scrollable media, and how does scrolling culture itself weaken critical thinking while accelerating the circulation of false narratives?
To address these questions, the study applies Critical Discourse Analysis (CDA) to two articles published on the Georgian platform Factcheck.ge under the “Climate Change” section. Language, as a primary vehicle of persuasion and manipulation, plays a central role in shaping perception. CDA offers a well-established framework for examining how discourse constructs, maintains and legitimises power relations within texts.
More specifically, the analysis draws on Van Dijk’s Ideological Square, a meta-strategy that explains how discourse emphasises positive attributes of the in-group while highlighting negative characteristics of out-groups, such as perceived opponents or enemies. This framework allows for a systematic examination of how ideological boundaries are constructed and reinforced through language.
The findings reveal a consistent pattern across both texts. Apocalyptic imagery, emotionally charged expressions and strongly negative framing dominated the discourse. Rather than aiming to inform or educate readers, the texts sought to provoke fear, anger, anxiety and hostility. The authors positioned “Group Members” as unquestionably credible while leaving no space to challenge their authority or verify the accuracy of the information presented. In doing so, the narratives actively polarised audiences and dismissed objective reality.
In scroll-driven media environments, emotionally saturated texts are often processed intuitively rather than critically. Users tend to react before they reflect, increasing the likelihood that disinformation will be accepted, shared and amplified. As a result, linguistic manipulation becomes a powerful catalyst in the viral circulation of false information. This dynamic directly shifts the responsibility of resistance from platforms alone to the individuals who navigate them.
Media literacy, therefore, cannot be viewed solely as an institutional obligation; it is equally an individual one. Maintaining strong “digital hygiene,” cultivating critical thinking skills and committing to continuous self-education serve as essential defences within an algorithmically driven media landscape. As scrolling culture continues to dominate information consumption, resisting manipulation requires sustained and conscious effort. Without such engagement, disinformation risks becoming not the exception, but a defining feature of the digital public sphere.
