How to deal with misinformation in East Asia and Europe

by Marion Bacher, Goethe-Institut, Seoul.

On October 28, twelve experts from East Asia and Europe met to discuss how to tackle misinformation and hate speech in their regions.

Rumors, lies and hate speech have always been part of the fabric of society. The advent of the internet and the rise of social media, however, have fundamentally changed how information is disseminated and consumed. To better understand these dynamics and to discuss how interventions against misinformation and hate speech are implemented, the Goethe-Institut and the German Federal Agency for Civic Education (bpb) organised the online symposium “Facts and Contexts Matter: Media Literacy in East Asia and Europe”.

Dynamics of mis- and disinformation in East Asia and Europe

Research interest, both from academia and journalism, in the “threat” of mis- and disinformation has grown steadily across the world. The keynote speakers Masato Kajimoto, University of Hong Kong, and Jeanette Hofmann, Freie Universität Berlin, agreed that the actual threat lacks empirical grounding. “We often assume that misinformation is a problem of the platforms, although a lot of false and misleading information comes from the political elite”, said Hofmann, and Kajimoto added: “Misinformation is rather a symptom of polarization, inequality, hate, mistrust and other issues – not the cause.”

Both experts talked about the influence of group identity on how information is shared: “If everyone was aware of tribal behavior, the situation might be better. We should include education on group behaviors and not focus on short, technology-centered education”, said Kajimoto. With his student-led fact-checking newsroom Annie Lab and the Asian Network of News and Information Educators (ANNIE), he has been focusing on identifying and creating credible, high-quality content, which the experts consider an important element in fighting misinformation.

How does generative AI change information environments?

The AI models currently in use reflect the regulations and ethics of the places where they are built. While Antonio Krueger, scientific director of the German Research Center for Artificial Intelligence (DFKI), reflected generally on the power of generative AI, the other panelists spoke about local applications: in South Korea, the popular chatbot Iruda transformed from a hate bot into a decent conversational partner after generative AI was introduced to the system. Sungook Hong, Seoul National University, explained how the tech start-up Scatter Lab established ethical principles in co-creation with the ICT Policy Institute and Iruda’s users. Isabel Hou, Secretary General of the Taiwan AI Academy, introduced the fact-checking bot Cofacts, which runs on open-source technology and whose database consists of knowledge crowdsourced by active users.
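
The crowdsourcing idea behind Cofacts can be made more concrete with a small sketch. The following Python snippet is a hypothetical illustration only, not the actual Cofacts code or API: reported messages are stored together with volunteer-written replies, and an incoming message is answered with the replies attached to the most similar stored message. All names (Reply, FactCheckDatabase, lookup) and the similarity threshold are invented for this example.

```python
# Hypothetical sketch of a crowdsourced fact-checking lookup, loosely inspired
# by the idea behind Cofacts. NOT the real Cofacts implementation or API.
from dataclasses import dataclass, field
from difflib import SequenceMatcher


@dataclass
class Reply:
    verdict: str      # e.g. "contains misinformation", "contains true info"
    explanation: str  # reasoning contributed by a volunteer
    source_url: str   # reference the volunteer provided


@dataclass
class FactCheckDatabase:
    # maps a previously reported message to the replies volunteers wrote for it
    entries: dict[str, list[Reply]] = field(default_factory=dict)

    def add_reply(self, reported_message: str, reply: Reply) -> None:
        """A volunteer attaches a fact-checking reply to a reported message."""
        self.entries.setdefault(reported_message, []).append(reply)

    def lookup(self, incoming_message: str, threshold: float = 0.8) -> list[Reply]:
        """Return replies for the stored message most similar to the incoming one."""
        best_match, best_score = None, 0.0
        for stored in self.entries:
            score = SequenceMatcher(None, incoming_message, stored).ratio()
            if score > best_score:
                best_match, best_score = stored, score
        return self.entries[best_match] if best_score >= threshold else []


# Usage: a user forwards a suspicious message; if volunteers have already
# checked something similar, their replies are returned.
db = FactCheckDatabase()
db.add_reply(
    "Drinking hot water cures the virus",
    Reply("contains misinformation",
          "No clinical evidence supports this claim.",
          "https://example.org/health-factcheck"),
)
print(db.lookup("drinking hot water cures the virus!"))
```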

State regulations vs. hands-off solutions

On August 25, 2023, the obligations of the EU’s Digital Services Act (DSA) began to apply to very large online platforms. Tech giants like Google, Meta or Amazon are now being held to stricter standards of accountability for content posted on their platforms. “The problem is that it is up to the nation states to define illegal content and thus what platforms need to take down,” criticized Jeanette Hofmann. This is especially problematic in states where courts are not independent: the DSA could end up as a powerful tool for EU governments seeking to restrict the right to freedom of speech. Daisuke Furuta, editor-in-chief of the Japan Fact-check Center, claimed that this may be one of the reasons the Japanese government is against platform regulation. “In Japan the study group on platform services of the Ministry of Internal Affairs and Communications recommended that the private sector should combat misinformation, rather than it being regulated by law.”

Dealing with misinformation and hate speech on the ground

In four breakout sessions the participants delved into innovative practices by organizations in South Korea, Japan, Taiwan and Germany.

Harnessing global perspectives for collective solutions

The last panel looked specifically at the connection between misinformation and hate speech. “In Germany the constitutional prominence of human dignity shapes our approach to hate speech”, said Mauritius Dorn, Institute for Strategic Dialogue. “In South Korea the definition focuses on discrimination. Hate speech means that people are excluded from mainstream society”, added Sookeung Jung, Policy Chair of the Citizen Coalition for Democratic Media (CCDM). Isabel Hou, Taiwan AI Academy, observed a strong focus in her country on disinformation that can be linked to hate speech.

The panelists agreed that tackling hate speech requires a holistic approach, with good coordination and the engagement of society as a whole. For Dorn it is also about “innovating democracies” through new tools in the public interest: “We are in a state of multiple crises, so we need spaces where people can get fact-based information about the impact of issues on their lives and ask their questions without hate and agitation.”

All in all, the conference showed that Asian and European countries have more in common when it comes to dealing with misinformation than perhaps initially assumed.

Author

Marion Bacher works at the Goethe-Institut Seoul and is the project lead of “Facts and Contexts Matter”, which focuses on best practices for tackling misinformation and online hate speech in East Asia. The project is a cooperation with the German Federal Agency for Civic Education. Contact: marion.bacher@goethe.de.