Shaping Beliefs in the Digital Age: Algorithmic Evolution and the Psychology of Disinformation

by Mariam Latsabidze, Tbilisi State University, Georgia

This article is the result of a joint initiative between the Media & Learning Association and the Georgian National Communications Commission (ComCom) to promote and provide a platform for young researchers in digital and media literacy. The paper won first place in the student analytical paper competition “Become a Media Literacy Researcher”, organised by ComCom. This article is a condensed version of the full-length academic paper. The complete research, including the full bibliography, is available here.

Disinformation has become one of the defining challenges of the modern information environment. In the digital age, the ways in which information is created, consumed and shared have undergone a profound transformation. What was once a relatively linear process – production by institutions, dissemination by media and reception by audiences – has evolved into a complex, algorithmically mediated ecosystem. While technological innovation has enabled unprecedented global connectivity, the integration of Artificial Intelligence (AI) into information systems has also reshaped visibility and influence. Algorithms are no longer passive tools of personalisation; they have become powerful drivers of virality, capable of accelerating the spread of disinformation at scale (Vosoughi et al., 2018; Lazer et al., 2018; Metzler & Garcia, 2024).

Technological Shift: From Chronology to “Emotional AI”

The evolution of social media algorithms can be understood as a progression through three distinct stages, each intensifying the ways platforms shape users’ perceptions of reality (Gillespie, 2018; Narayanan, 2023).

In the early chronological era, social platforms presented content in the order it was published. This model was relatively transparent and democratic, granting equal visibility to posts regardless of popularity or engagement. However, it lacked the personalisation and efficiency that users increasingly demanded as platforms scaled.

The second phase, the machine learning era, introduced recommendation systems designed to predict user preferences based on past behaviour. While this shift improved relevance, it also produced unintended consequences. Eli Pariser’s (2011) concept of the “Filter Bubble” captured how users became increasingly insulated from opposing viewpoints, reinforcing existing beliefs and narrowing exposure to diverse information.

The current phase – the “AI and Emotional Era” – marks a more profound transformation. Modern algorithms analyse not only what users click on, but how long they watch, how they interact and even which emojis they use. These emotional signals are aggregated into predictive profiles that allow platforms to deliver content aligned with users’ moods, fears and deeply held beliefs. In this environment, emotional resonance often outweighs factual accuracy, creating fertile ground for disinformation to flourish.
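To make the contrast between the three eras concrete, the sketch below compares their ranking logic in simplified Python. It is an illustration of the general idea only: the field names (predicted_relevance, emotional_signals, mood_profile) and the scoring formula are invented for this example and do not correspond to any platform’s actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    timestamp: float                 # seconds since publication epoch
    predicted_relevance: float       # 0..1, a model's guess based on past behaviour
    emotional_signals: dict = field(default_factory=dict)  # e.g. {"anger": 0.8, "joy": 0.1}

def rank_chronological(posts):
    # Era 1: newest first; every post is treated equally.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def rank_machine_learning(posts):
    # Era 2: order by how relevant a model predicts each post is to this user.
    return sorted(posts, key=lambda p: p.predicted_relevance, reverse=True)

def rank_emotional(posts, mood_profile):
    # Era 3 (illustrative): relevance is boosted by how strongly a post's
    # emotional signals resonate with the user's inferred mood profile.
    def score(p):
        resonance = sum(p.emotional_signals.get(emotion, 0.0) * weight
                        for emotion, weight in mood_profile.items())
        return p.predicted_relevance * (1.0 + resonance)
    return sorted(posts, key=score, reverse=True)
```

Notably, factual accuracy appears nowhere in the third scoring function: a post that resonates with the user’s inferred mood outranks a more accurate but less provocative one, which is precisely the dynamic discussed in the next section.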

The Engagement Trap and Algorithmic Amplification

At the core of today’s platforms lies an engagement-based ranking model. Content visibility is determined less by its truthfulness than by its capacity to provoke reactions. As Gillespie (2018) argues, platforms have shifted from being neutral conduits of information to active architects of public attention.

This system structurally favours emotionally charged content – particularly outrage, fear and moral indignation – because such emotions reliably generate clicks, shares and comments. The result is a feedback loop in which misinformation is not an accidental byproduct, but an emergent feature of the system itself (Narayanan, 2023; Yin et al., 2025).

Compounding this problem is the use of automated accounts or bots, which are often deployed to simulate popularity and consensus. By artificially inflating engagement metrics, these systems create the illusion that fringe narratives are widely accepted. Algorithms, designed to interpret popularity as relevance, then amplify these narratives further, embedding them into mainstream information flows (Ferrara, 2023; OECD, 2024).
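The arithmetic behind this dynamic can be shown with a deliberately simplified example. The scoring formula and all of the numbers below are invented; the point is only that a ranker which reads reaction counts as a proxy for relevance cannot distinguish organic popularity from manufactured popularity.

```python
# Illustrative only: how artificially inflated reactions can shift an
# engagement-based ranking. The weighting and all counts are invented.

def engagement_score(likes, shares, comments):
    # Popularity-as-relevance heuristic: more reactions mean more visibility.
    return likes + 2 * shares + 3 * comments

mainstream_post = engagement_score(likes=40, shares=5, comments=10)   # 80
fringe_post = engagement_score(likes=15, shares=2, comments=4)        # 31

# A small bot network adds fake likes, shares and comments to the fringe post.
fringe_post_boosted = engagement_score(likes=15 + 300,
                                       shares=2 + 60,
                                       comments=4 + 40)               # 571

print(mainstream_post, fringe_post, fringe_post_boosted)
# Reading popularity as relevance, the ranker now places the fringe
# narrative well above the organically popular post and amplifies it further.
```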

The Psychological Foundation: Why We Fall for It

Technology alone cannot explain the success of disinformation. Human psychology provides the cognitive and emotional infrastructure upon which algorithmic systems operate.

Kahneman’s (2011) distinction between System 1 and System 2 thinking is particularly instructive. System 1 is fast, automatic and intuitive, guiding most everyday decisions. System 2, by contrast, is slow, deliberate, and cognitively demanding. Because mental effort is experienced as costly, individuals naturally rely on System 1 heuristics when navigating the constant stream of online information.

This reliance gives rise to well-documented biases. Confirmation bias leads individuals to accept information that aligns with their existing beliefs while dismissing contradictory evidence (Nickerson, 1998). The illusory truth effect further amplifies this tendency: repeated exposure to a claim increases its perceived accuracy, regardless of whether it is true (Kahneman, 2011). In algorithmic environments optimised for repetition and visibility, familiarity easily masquerades as truth.

Social Identity and Group Bias

Belief formation in digital spaces is also deeply social. According to Tajfel and Turner’s (1979) social identity theory, individuals derive self-esteem from group membership, which in turn fosters in-group favouritism and out-group hostility. In online environments, information itself becomes a marker of identity. Sharing a specific narrative is often less about the facts and more about reinforcing group unity and distinguishing “us” from “them”.

As a result, disinformation spreads not only because it is persuasive, but because it is socially functional. It reinforces group cohesion, validates shared grievances and strengthens symbolic boundaries between communities.

A Self-Reinforcing Human-Machine Loop

Crucially, algorithms do not invent these psychological tendencies. Rather, they amplify them. Large-scale empirical studies suggest that while algorithms do shape exposure, users’ own choices and social networks play an even greater role in ideological segregation (Bakshy et al., 2015; González-Bailón et al., 2023). However, once a user engages with a particular narrative, algorithmic systems intensify that engagement, reinforcing existing preferences and biases.

This creates a closed feedback loop in which emotional intensity and engagement are rewarded, while factual accuracy becomes secondary. Over time, users are drawn deeper into personalised information environments where alternative perspectives are increasingly absent and misinformation thrives (Milli et al., 2025).
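This loop can be illustrated with a toy, deterministic simulation. Every parameter below is an assumption chosen for clarity rather than a value drawn from any study; the sketch only demonstrates the qualitative mechanism: content that draws slightly more engagement is shown slightly more often, which in turn draws still more engagement.

```python
# Toy sketch of the self-reinforcing human-machine loop described above.
# All parameters are invented; only the qualitative dynamic matters.

exposure = {"neutral_news": 0.5, "outrage_narrative": 0.5}            # share of the feed
ENGAGEMENT_RATE = {"neutral_news": 0.05, "outrage_narrative": 0.15}   # reactions per impression
BLEND = 0.3  # how strongly the ranker chases last round's engagement

for step in range(20):
    # Expected engagement this round: exposure times the topic's engagement rate.
    engagement = {t: exposure[t] * ENGAGEMENT_RATE[t] for t in exposure}
    total = sum(engagement.values())
    # The ranker shifts exposure towards whatever drew the most reactions.
    exposure = {t: (1 - BLEND) * exposure[t] + BLEND * engagement[t] / total
                for t in exposure}

print(exposure)  # the outrage narrative steadily crowds out the neutral feed
```

Even with a modest difference in engagement rates, the exposure share of the more provocative topic grows at every step, mirroring the narrowing of perspectives described above.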

Implications for Higher Education and Media Literacy

Understanding disinformation as a systemic phenomenon has important implications for education and policy. Addressing the problem requires more than fact-checking or content moderation alone. Two complementary strategies are essential.

First, there is a need for greater transparency and regulation of algorithmic recommendation systems. Without insight into how visibility is determined, users remain vulnerable to manipulation embedded within platform design.

Second, media literacy must evolve beyond basic skills of source evaluation. What is urgently needed is digital psychological resilience – the ability to recognise emotional triggers, understand cognitive biases and deliberately slow down System 1 reactions in emotionally charged environments. Higher education institutions are uniquely positioned to cultivate these skills, equipping students not just to consume information critically, but to understand themselves as psychological actors within algorithmic systems.

Conclusion

The contemporary information environment is the product of a powerful interaction between human psychology and machine learning. Algorithmic models have transformed media consumption into a personalised, emotionally optimised experience in which viral success is structurally embedded. Disinformation thrives not because individuals are irrational or technology is inherently malicious, but because both operate in a mutually reinforcing loop.

Breaking this cycle requires a holistic response – one that addresses the design and governance of digital infrastructures while simultaneously strengthening the psychological resilience of audiences. Only by engaging both sides of this equation can societies hope to mitigate the influence of disinformation in the digital age.