Countering Misinformation in the EU: My Doctoral Speech

I had the opportunity to give a short speech at the start of my PhD defence ceremony, to introduce my dissertation “Countering Misinformation in the European Union: Origins, Evolution, and Prospects”. The speech is reproduced below as it was delivered on Tuesday 3 June at Leiden University.

Ladies and gentlemen, good morning.

Thank you for being here today.

My doctoral thesis explains why EU policies against misinformation emerged the way they did, how they evolved, and the lessons about information governance deriving from this case study.

Before I continue, allow me to explain the difference between misinformation and disinformation: the accidental and the intentional spreading of false information, respectively. For example, when someone mistakenly shares incorrect vaccine information, that’s misinformation. When state actors deliberately spread lies as propaganda, that’s disinformation. These concepts have heavily influenced how scholars and policymakers think about, and act on, information challenges. For the last decade, the European Union has primarily focused on debunking and obstructing propaganda spread by the Kremlin. It has done so because it considers disinformation a security threat to society. Lately, it has broadened the scope of its efforts, for example by obliging platforms to work with fact-checkers.

One of the key points I make in my dissertation is that differentiating between misinformation and disinformation (based on intentions and “bad” actors) isn’t actually helpful. Erroneous beliefs spread in ways that we do not fully understand, and scholars disagree all the time about their impact on society. One thing is certain, though: misinformation spreads regardless of intentions. First, studies show that beliefs take hold depending on how well they fit pre-existing assumptions. Second, information goes viral thanks to algorithms designed to prioritise shocking content because it keeps users engaged on the platform. So misinformation spreads through well-intentioned people who want to enlighten others by sharing information that they genuinely believe to be correct. And anyway, intentions are subjective, because what I perceive as “intent to harm” may be perceived as “intent to protect from harm” by someone else.

Trying to distinguish between good and bad, benevolent and evil actors, or beneficial and harmful content (which is what the EU has been doing) is a slippery slope. And we can’t afford moral paradoxes in a world where actors thrive on confusion to gather more power.

[Photo: Sophie Veriter speaking at a podium with the seal of Leiden University]

I’m sure you are all familiar with the adage “divide and conquer”.

Having researched misinformation for the past decade, I’ve often thought: “This person lives in a different universe!”. Technically, we all live on the same planet, but people construct their identities and beliefs from very different information universes. They live in countries with different media, different politicians, a different justice system, a different education system, and even a different version of the Internet.

People and organisations who seek more power — like politicians and big tech companies — capitalise on these divisions. The “us vs. them” narrative is popular because it taps into deeply rooted psychological mechanisms that make us feel better when we are surrounded by people who think like us, and more hostile towards people who are different from us, especially in times of fear and uncertainty.

But instead of “divide and conquer”, I think we could “unite and liberate”. So, another key point I make in my thesis is that structural measures that seek to build societal resilience to information manipulation are much more appropriate, in the long term, for tackling misinformation, propaganda, and opposition between information ecosystems. It’s a better objective for the public good.

Ideally, we should all be able to critically consume information and call bullshit where it is due, whether that comes from Russia, China, Hungary, or the European Commission, regardless of the intentions of its spreaders, and regardless of its consequences for political regimes or corporations. That is the ultimate public goal. But it’s not easy to empower everyone to understand each other and to collaborate on a scale that has never existed before.

The problem with our division into separate, impossible-to-reconcile universes is that it makes accountability and transparency incredibly difficult. The EU has created new reporting obligations for online platforms, but the impact on society is still unclear. Who, aside from civil servants, reads those transparency reports? They are not designed with citizens in mind. And they are simply not working. While the EU claims its trustworthiness is at an all-time high, other studies show that trust in public institutions is declining: half of EU citizens are dissatisfied with democracy, and most of them are skeptical about their governing elite. New European laws like the Digital Services Act have so far had no impact on the amount of harmful content online. Experts are concerned that we are witnessing the end of the “liberal world order”, as democracies and multilateralism are in decline worldwide.

What a mess, right?

Before my Master’s studies, I worked as a consultant for the European Commission, implementing its public diplomacy programme in the EU’s eastern neighbourhood. I had the chance to meet thousands of young people from all horizons who were, on the whole, able to discern truth from fiction, to unite around common principles of freedom and justice, and to acknowledge their own biases by connecting with people who viewed the world differently. We were creating opportunities for those connections to happen, and it was honestly one of the best experiences of my life.

Still, it never felt like it was enough, as the post-Soviet space has continued to be riddled with violent conflict. Back then, I remember thinking: “Why is the EU not doing more to fight misinformation and propaganda?”. And as I noticed that more and more people were being hired to tackle this problem, I then wondered: “Wait, how did that wake-up call happen?”.

This experience is the motivation behind my dissertation, which analyses how the EU has responded to misinformation. I look at the origins of those policies and their development, and I propose some ideas to improve how we think about them. To do that, I identified big milestones in the development of EU policies against misinformation and retraced, like a detective, who did it, how they did it, and where it happened. The evidence I used to build my case comes from interviews that I conducted with 50 people and from over 400 documents, like laws, voting records, and meeting minutes, that I analysed with qualitative data analysis software.

When I started working on this subject, I thought I was going to uncover how the EU had become a grand strategic actor, showing leadership in pushing for new measures to counter Russian influence operations. As is often the case when starting a new research project, I was completely wrong… Perhaps one of the most valuable lessons I learned during this PhD is that there are no simple explanations for complex issues. I learned so many things while speaking to diplomats in Brussels and The Hague, connecting the dots between politics and media, and reading hundreds of books and articles on misinformation, from psychology and political science to physics and network science.

I could not possibly tell you everything, but I chose to highlight four fundamental empirical findings in my propositions:

  1. When I looked for the origins of misinformation policies, the very first impetus to do more at the EU level, I found that the Baltic states and particularly Latvia were at the forefront of a coalition of states who lobbied strategically and actively to counter Russian propaganda.
  2. In policy circles and in the media, misinformation has been framed as a security threat. This is understandable, but problematic, because decision-making procedures in the field of foreign and security policy allow for less scrutiny and democratic accountability. Security is a very subjective concept that evolves over time, and political actors can use this framing strategy to hide certain information. This is the case in Hungary, for example, where the state of emergency is used to silence critics of the Orban regime.
  3. The broadening of EU measures, from fact-checking to media bans, means that our information universe is restricted not only for what’s illegal, but also for what political institutions consider “harmful”. This is not optimal for an organisation like the EU, which preaches democracy and freedom. It creates a paradox and damages its credibility as a soft power on the global stage.
  4. This also means that citizens are not involved in decisions about a fundamental pillar of democracy: the public sphere. Here lies the key to boosting EU legitimacy in its information governance: involving both independent authorities and citizens in decisions about what shouldn’t be allowed in our information universe.

I hope that this introduction sparked your curiosity and that you will enjoy the remaining time of this defence to learn more about this fascinating and fundamental issue for society.

Thank you.
