Beyond Fact-Checking: What Europe Gets Wrong about Disinformation, and How to Fix It

by Sophie L. Vériter, Leiden University, the Netherlands.

Misinformation and disinformation[1] have become defining challenges of our time: they sway elections, influence public health, and reshape international relations. In Europe, this issue has mostly been framed in security terms, with Russia seen as the primary culprit of a problem assumed to be chiefly of foreign origin. The European Union (EU) has thus been responding to this challenge with increasingly far-reaching measures aimed at regulating Big Tech, protecting democratic resilience, and countering foreign information manipulation.

For example, the Digital Services Act requires very large online platforms to mitigate the negative effects of disinformation on democratic processes, public security, and civic discourse (Arts. 34-35). In practical terms, this means that platforms such as Facebook and X are obliged to work with fact-checkers, tighten their content moderation, and adjust their algorithms to avoid amplifying content that the European Commission considers harmful, even when it is technically legal. More recently, the EU banned Russian state-sponsored media from broadcasting in its territory, an unprecedented measure that raised concerns about its legal foundation, proportionality, and effectiveness, as well as its potential to undermine freedom of information.

For those of us working in or studying this field, however, it is clear that the problem is not only one of harmful content and foreign interference; it is also one of trust, literacy, and civic engagement.

On the ground: firsthand encounters with information disorder

Over the past seven years, I have closely analysed the EU’s evolving approach to misinformation and disinformation throughout my MPhil and PhD research. My work draws on over 50 in-depth interviews with EU policy-makers, diplomats, and communications experts, alongside over 400 documents such as press releases, meeting minutes, speeches, and voting records. This data allowed me to critically assess how EU policies in that field have developed since 2015. What emerges is both promising and cautionary: while the EU has made bold moves in regulating platforms and countering foreign interference, it still often overlooks the deeper societal roots and democratic implications of information governance.

Having previously worked in EU public diplomacy, engaging young people across Armenia, Azerbaijan, Belarus, Georgia, Moldova, and Ukraine, I witnessed firsthand how disinformation works in practice: it spreads confusion and erodes trust. I was confronted daily with contradictory narratives about the EU, Russia, NATO, and nearly every aspect of international politics. My role was to support young people in navigating these competing stories, helping them access reliable information that could empower them and foster mutual understanding.

Later, during the COVID-19 pandemic, I spent time in Latin America, where I encountered similarly distorted information landscapes. I met people whose views of the pandemic, vaccines, and global institutions were shaped by local TV and social media. Many expressed deep distrust in public health authorities, pointing to widespread corruption and elite capture—concerns that, in some cases, were hard to dismiss. Back then, I still saw Europe as a bastion of public trust and high-quality journalism. But recent events, particularly Western governments’ and media’s response to the genocide in Gaza, have made it harder to defend the EU’s moral authority or the liberal international order it seeks to uphold. The contradiction between its proclaimed values and their selective application undermines both its legitimacy and its credibility. For media literacy professionals, this inconsistency complicates the task of fostering trust in democratic systems and journalism.

The paradox of trust and control in democratic societies

These experiences led me to a crucial realisation: not all distrust in government or media is irrational. In fact, in many cases, it is warranted. People rightfully turn away from institutions when information governance is absent, corrupt, or overly controlling. Instead of promoting trust for its own sake, democracies should focus on fostering people’s capacity to make sound judgments about whom and what to trust. Political scientist Pippa Norris has shown that societies benefit most when citizens can accurately assess the trustworthiness of institutions. The goal, then, is well-informed scepticism grounded in transparency and accountability.

In contrast, recent years have seen the EU adopt a more securitised and strict approach to information governance. It has increasingly focused on regulating platforms and penalising “harmful” content. While some interventions may be necessary to reduce viral falsehoods, this shift also empowers political institutions to set the boundaries of legitimate speech, a power that, if left unchecked, risks doing more harm than good. Measures designed to combat misinformation can easily become tools to suppress dissent or uncomfortable truths, especially when applied unevenly or opaquely. Worse still, they risk legitimising similar (often more repressive) approaches in authoritarian contexts, where “fake news” laws are routinely weaponised against opposition.

If the EU continues down this path, it risks alienating its own citizens, losing trust, and undermining its global soft power. By responding to the problem of disinformation with control rather than democratic openness, it may end up reinforcing the very dynamics of distrust and democratic backsliding it claims to counter.

What I have learned through these experiences—across different continents, conflict zones, and crises—is that the real challenge lies not only in correcting falsehoods but in rebuilding the conditions for trust to flourish. In a fragmented media ecosystem where people live in different “information universes”, each believing the others to be misinformed or manipulated, fact-checking and counter-messaging are necessary but not sufficient. We need to invest in long-term strategies: civic education that builds critical thinking, participatory governance that fosters inclusion, and media ecosystems that earn public trust by being transparent, independent, and accountable.

Lessons for the media literacy field

As my research has shown, addressing the root causes of misinformation demands long-term, systemic approaches rooted in democratic values. Here are four key lessons that emerged from my PhD thesis and which are especially relevant for media literacy professionals, educators, civil society, and policy-makers:

  1. Prevention is more powerful than reaction

Many European measures focus on reacting to harmful content (removing it, labelling it, or penalising platforms). But research shows that preventive approaches are more effective in the long run. Building citizens' ability to critically assess information and engage with complexity before falsehoods take root helps to inoculate societies against manipulation. Investing in preventive infrastructure such as media literacy, civic education, pluralism, and digital inclusion yields stronger and more sustainable results than crisis-mode reactions.

  2. Media literacy needs serious institutional support

Across the EU, media literacy initiatives often depend on time-bound projects led by overstretched NGOs or educators. While these initiatives are crucial, they rarely have the long-term support needed to scale or sustain impact. A key takeaway from my doctoral research is that media literacy should be treated as a public good, not a quick fix. This means integrating it into school curricula, public service broadcasting, and lifelong learning systems, backed by predictable, multi-year funding and clear European policies.

  3. Media literacy as a pillar of democracy

Too often, media literacy is framed as a defence against foreign threats or manipulation, an instrument to “protect” citizens from hostile actors. While there is some merit to that, this frame can backfire by reinforcing suspicion, undermining openness, or stigmatising certain groups. Instead, media literacy should be seen as a democratic tool empowering people to participate in public life, understand diverse perspectives, and engage constructively across differences. Media literacy must become an integral part of civic life.

  4. Public participation must be part of the solution

Efforts to counter misinformation often leave the public out of the loop, treating citizens as passive recipients of regulations and policies that profoundly shape their information ecosystem. However, my research shows that trust cannot be built without people having a voice in the system itself. Participatory governance mechanisms, such as citizen panels, deliberative forums, or co-designed media policies, can help rebuild legitimacy and foster shared responsibility. We must design information governance with people, not just for them.

Conclusion

We are living through an age of profound information complexity. The stakes are high, not just for Europe and the West, but for democracy itself. While the EU has taken important steps to regulate platforms and counter foreign interference, its approach remains overly securitised, top-down, and reactive. If it hopes to strengthen democratic resilience, the EU must move beyond controlling narratives toward co-creating the conditions for trust to emerge from the bottom up, ideally incentivising others to follow suit.

This means investing in participatory, preventive, and people-centred approaches: embedding media literacy in public institutions, fostering informed scepticism rather than blind trust, and ensuring that everyone has a voice in shaping the information systems that govern their lives. Misinformation is not only a technological or geopolitical challenge; it is a civic one, and the solutions must be democratic and inclusive.


[1] See conceptual clarifications here. In my thesis, I argue that differentiating between misinformation and disinformation (based on intentions and “bad” actors) is not particularly helpful. Erroneous beliefs spread in ways that we do not fully understand, and scholars continue to disagree about their impact on society. Misinformation often propagates through well-intentioned people who want to enlighten others by sharing information that they genuinely believe to be correct. In any case, intentions are subjective: what one person perceives as “intent to harm” may be perceived by another as “intent to protect from harm”.

Sophie L. Vériter is a researcher at Leiden University specialising in information challenges and their governance. Her work explores how political narratives shape European Union policies in this field, and their consequences for democracy. Previously, Sophie worked as a public diplomacy consultant for the EU. She is on the editorial board of The Hague Journal of Diplomacy and recently published in the Journal of Common Market Studies. She blogs at www.sophiepomme.com.