Countering Disinformation supported by Artificial Intelligence

by Jochen Spangenberg, Deutsche Welle / vera.ai, Germany.

Meet vera.ai

vera.ai is a research project co-funded by the European Commission’s Horizon Europe programme, with additional contributions from the national research funding authorities of Switzerland and the UK.

In short, the project’s aim is to develop novel AI- and network-science-based methods that assist verification professionals throughout the complete content verification workflow. In other words: the project uses advances in technology to support journalists, as well as those active in related fields (e.g. human rights investigators), in analysing and verifying digital content. This ranges from text to audio-visual material, and also covers areas such as network analysis and the coordinated spreading of disinformation (known as Coordinated Inauthentic Behaviour), to name but a few of the research areas covered in the project.
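To make the idea of Coordinated Inauthentic Behaviour more concrete, here is a minimal illustrative sketch, not vera.ai’s actual method: one common starting point in the literature is to flag account pairs that repeatedly share the same URL within a short time window. The data, the 60-second window and the account names below are all invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical share events: (account, url, unix_timestamp).
shares = [
    ("acct_a", "http://example.com/claim", 100),
    ("acct_b", "http://example.com/claim", 130),
    ("acct_c", "http://example.com/claim", 4000),
    ("acct_a", "http://example.com/other", 200),
    ("acct_b", "http://example.com/other", 210),
]

WINDOW = 60  # seconds: shares this close together count as suspiciously synchronised

def coordination_edges(shares, window=WINDOW):
    """Count, per account pair, how often both shared the same URL
    within `window` seconds of each other."""
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((account, ts))
    edges = defaultdict(int)
    for events in by_url.values():
        for (a1, t1), (a2, t2) in combinations(sorted(events, key=lambda e: e[1]), 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                edges[tuple(sorted((a1, a2)))] += 1
    return dict(edges)

edges = coordination_edges(shares)
print(edges)  # acct_a and acct_b co-shared two URLs within the window; acct_c did not
```

Pairs with repeated near-simultaneous co-shares form the edges of a coordination graph, which network-analysis tools can then cluster and visualise.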

vera.ai concept diagram

Bringing together different skills, expertise and roles

The vera.ai project consortium is made up of 14 partners, who contribute complementary skills and expertise from many different domains and sectors.

Broadly speaking, project partners fall into two groups: those developing and providing tools, technologies and services, and those working with what is being developed. The latter trial and test it and guide the development work by gathering user needs and requirements, and by assessing and describing workflows and best practices.

Building on past successes – and taking them further

vera.ai is in the fortunate position that it can build on the work and outcomes of past projects and activities. In particular, the work of the ‘forerunner projects’ InVID and WeVerify needs to be mentioned.

While InVID focused in particular on the analysis and verification of (audio)visual content, WeVerify was wider in scope. Of great benefit to vera.ai is the fact that all WeVerify project partners are also active in vera.ai. They have been supplemented by additional partners with complementary skillsets, profiles and expertise, making the vera.ai consortium a versatile team with a broad skills base and know-how. All this serves one overriding aim: to counter disinformation in an effective, results-driven and user-centric way.

Expected and existing outcomes

As outlined above, the vera.ai project has many aims and ambitions and can count on a versatile group of experts from different domains. The same breadth applies to what is being tackled and worked on: expected outcomes are wide in scale and scope. Some of them are listed below, together with how the project consortium is making project results available.

To start with, project partners are working on individual technologies, tools and services that support the analysis and verification of digital content items. Their status ranges from concepts to prototypes to rolled-out services, with some coming out of previous activities and projects and being enhanced further in vera.ai. These include, among others:

  • an image verification assistant,
  • a deepfake detection service,
  • a location estimation service,
  • a rumour veracity classifier,
  • a video analysis and keyframe extraction service,
  • a “check GIF service” that compares an original image with a tampered one,
  • an OCR (Optical Character Recognition) service,
  • network analysis and visualization services,
  • cross-lingual and multimodal near-duplicate search,
  • audio forensics tools and services,
  • multilingual credibility assessment,
  • a database of previously debunked content items,
  • and much more.
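Several of the services above, such as near-duplicate search and comparing an original image with a tampered one, build on the idea of a perceptual fingerprint. The following is an illustrative toy sketch, not the project’s actual implementation: a tiny average-hash over invented 4×4 grayscale grids, where a small Hamming distance between hashes suggests a near-duplicate.

```python
def average_hash(pixels):
    """Return a bit string: '1' where a pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 grayscale "image" (values 0-255), invented for illustration.
original = [
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
# A lightly tampered copy: one dark pixel brightened.
tampered = [row[:] for row in original]
tampered[0][2] = 220

distance = hamming(average_hash(original), average_hash(tampered))
print(distance)  # small distance -> likely a near-duplicate of the original
```

Real services use far more robust fingerprints (and, for cross-lingual and multimodal search, learned embeddings), but the comparison principle is the same: index fingerprints, then rank candidates by distance.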

In short, and using scientific project language, vera.ai will provide multimodal, trustworthy AI tools for the detection of deepfakes, synthetic media and manipulated content, and for the discovery, tracking and impact measurement of disinformation narratives and campaigns. This is to be done across social platforms, modalities and languages, through integrated AI and network science methods.

Making project work and results available

There are various ways in which the project outcomes listed above are being made available to interested audiences. These include:

  • sharing research results in the form of academic papers and publications,
  • making tools and services available as so-called ‘stand-alone solutions’, mostly web-based (see for example a range of services including, among others, the Image Verification Assistant provided by CERTH-ITI),
  • integrating outcomes, if suitable, into commercial tools (e.g. into Truly Media, a commercially available platform for collaborative verification – used by news organisations and the EDMO community),
  • integrating outcomes into the so-called “InVID-WeVerify-vera.ai plug-in” (or “Fake News Debunker”), a freely available browser extension already used by 90,000+ individuals, optimized for Chrome and available for download in the Chrome Web Store.
screenshot of the “InVID-WeVerify-vera.ai plug-in” (or “Fake News Debunker”)

The way ahead

Numerous results for the analysis and verification of digital content coming out of vera.ai and its predecessors are already available to the verification community. In vera.ai, some of these existing results are being extended and improved further, while additional, completely new services are being developed and will also be provided. Artificial Intelligence plays a vital role in all of this.

Depending on suitability and the respective exploitation opportunities, as many tools and services as possible will be made available to the community through different avenues. In this way, the vera.ai project consortium plays its part in the fight against disinformation.

More information and resources – stay tuned

For more information, and to stay updated about project developments and outcomes, check out the project website at https://www.veraai.eu/home.

Follow the project on Twitter via https://twitter.com/veraai_eu.

The vera.ai YouTube channel can be found here: https://www.youtube.com/@veraai_eu.

Academic publications coming out of vera.ai are listed on https://zenodo.org/communities/veraai/ and the project website.

Project presentations are also available on the project website.

vera.ai project coordinator:

Dr. Symeon Papadopoulos, Senior Researcher, Information Technologies Institute, CERTH (Mail: papadop (at) iti (dot) gr)