Fake news, algorithms and democracy protection: can David beat Goliath?


The massive spread of online disinformation and deep fakes poses an ever-increasing threat to democracy and fundamental rights in European society.

Fake news as a threat to democracy
The threat to democracy and fundamental rights in Europe stems mainly from rising amounts of disinformation during electoral periods, the rapid spread of information through social media, the targeting of vulnerable parts of the population and the (technical) difficulties of recognising fake content as such. Even though the detrimental effects of such misleading online content on the democratic values of European society are not disputed, the (non-)regulatory responses to these challenges have been remarkably diverse. On the one hand, the EU is hesitant about immediate regulatory solutions and currently fosters self-regulatory approaches, as set out in the Commission Communication on tackling online disinformation, the Report of the High-level Group on Online Disinformation and a non-binding Code of Practice. On the other hand, recent German legislation aimed at combating manipulative information has divided public opinion as to its effectiveness, the likelihood of internet censorship and its compliance with freedom of expression. Despite these controversies, other countries are equally considering taking the legislative route. In the Netherlands, where 'nepnieuws' (fake news) is not yet a major regulatory concern according to a report of the Rathenau Instituut, a wrong identification of fake news provoked legal action against the EU and led to initiatives to abolish the EU anti-disinformation task force.

Demoting and diluting disinformation
Under the current EU legal regime, online platforms have no obligation to remove harmful but legal content, such as disinformation, also known as fake news. Article 14(1)(a) of the E-Commerce Directive does not extend to such content, as it refers only to ‘illegal activity or information’. Moreover, the recent Commission soft law instruments equally do not prompt its removal: in the above-mentioned Commission Communication (p. 7-8), removal or blocking of disinformation does not appear among the objectives of tackling it. Since this field remains largely self-regulated – save for voluntary and non-binding acts such as the EU Code of Practice – platforms have adopted various approaches to combating disinformation. These range from deploying (human or artificial) fact-checkers, flagging disinformation, surrounding disinformation with truthful information and browser extensions to detect disinformation (for example Fake News Detector), to closing down fake accounts or bots spreading fake news. Demoting such information in the social media feed is the strategy currently used by Facebook: it initially used a red warning flag, but later turned to demoting disinformation in the news feed. In addition, van der Linden and his colleagues suggest the use of ‘inoculation’, or pre-emptive warnings about disinformation.
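To make the demotion strategy concrete, the following is a minimal sketch of the idea of down-ranking rather than removing flagged content. All names, scores and the demotion weight are invented for illustration; platforms such as Facebook do not publish their actual ranking algorithms.

```python
from dataclasses import dataclass

# Hypothetical demotion weight; real platforms do not disclose theirs.
DEMOTION_FACTOR = 0.2

@dataclass
class FeedItem:
    title: str
    engagement_score: float
    flagged_by_fact_checkers: bool = False

def ranked_feed(items):
    """Order items by engagement, demoting (not removing) flagged ones."""
    def effective_score(item):
        if item.flagged_by_fact_checkers:
            # Flagged content stays in the feed but sinks in the ranking.
            return item.engagement_score * DEMOTION_FACTOR
        return item.engagement_score
    return sorted(items, key=effective_score, reverse=True)

feed = [
    FeedItem("Viral fake cure", 9.0, flagged_by_fact_checkers=True),
    FeedItem("Local election results", 5.0),
    FeedItem("Weather update", 2.0),
]
for item in ranked_feed(feed):
    print(item.title)
```

The point of the sketch is that the highest-engagement item, once flagged, drops below genuinely informative content without being censored outright.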

Fighting fake news with algorithms?
Given the scope of news to be verified and the increasing amount of untruthful news, Artificial Intelligence tools will gradually become the key instrument to detect and tackle disinformation, as also recognised in the above-mentioned Commission Communication. To this end, the Commission proposes, for example, the use of cognitive algorithms to enhance the reliability of search results (p. 11). Indeed, algorithmic tools could make the fight against disinformation more efficient: automatic detection, demotion and informing the public about the fake nature of information could increase the speed and extent of combating fake news. However, it cannot be excluded that such automated means produce inadvertently incorrect results, so that truthful information is wrongly labelled as disinformation. Moreover, automated means could be deliberately misused to downplay allegedly undesirable content such as condemnatory, challenging, surprising or distressing opinions (in this sense Commission Communication, p. 8). This could lead to a potentially unjustified impairment of both the freedom of expression of news creators and the freedom of information of news consumers. An adverse impact on freedom of expression can in turn harm deliberative democracy. An additional challenge with algorithmic detection of fake news lies in the circumstance that algorithms are ill-equipped to make value choices and to balance competing values, such as the protection of voters against manipulation on the one hand and freedom of expression on the other. Democracy and free elections can therefore be impacted not only by fake news itself, but also by inaccurate decisions on whether a news item is indeed disinformation.
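The false-positive risk described above can be illustrated with a deliberately naive toy detector. The keyword heuristic below is invented for this sketch and is nothing like a production system; its purpose is only to show how an automated rule inevitably mislabels some truthful items.

```python
# Invented keyword heuristic for illustration only:
# sensational wording is treated as a signal of disinformation.
SUSPECT_WORDS = {"shocking", "secret", "miracle", "exposed"}

def looks_like_disinformation(headline, threshold=1):
    """Flag a headline containing at least `threshold` suspect words."""
    words = set(headline.lower().split())
    return len(words & SUSPECT_WORDS) >= threshold

headlines = [
    ("Miracle cure exposed by secret lab", True),              # disinformation
    ("Shocking election result surprises pollsters", False),   # truthful
    ("Parliament passes budget", False),                       # truthful
]

for text, is_fake in headlines:
    flagged = looks_like_disinformation(text)
    note = "FALSE POSITIVE" if flagged and not is_fake else ""
    print(f"flagged={flagged!s:5} {note:14} {text}")
```

The truthful but sensationally worded headline is flagged alongside the genuine disinformation: exactly the kind of inaccurate decision that can impair the freedom of expression of a legitimate news creator.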

The way forward: the importance of research and empowering the citizens
The lack of comprehensive doctrinal and empirical research on these controversial issues raises the question of which tools are the most appropriate and the most accurate to detect and prevent online disinformation at the national and the European level. Researchers need to examine whether and to what extent the EU Member States and/or the EU should adopt regulation aimed at detecting and preventing online disinformation with the goal of protecting democracy and fundamental rights, who the addressees of such regulation should be and what its key characteristics should be. Such regulation always needs to be informed by high ethical standards. Moreover, it is important to research which method of detecting disinformation is statistically the most accurate and the most effective, in order to choose and combine a set of methods that will make our democracy manipulation-proof.

Finally, empowering citizens to detect disinformation themselves is an important element in this regard. The research community can empower them by spreading research results to broader society, and hence complement government actions in this domain. In the end, citizens themselves (consciously or subconsciously) take the final decision whether they trust a particular piece of information and whether they trust its source. Indeed, citizens are extremely vulnerable when it comes to disinformation, because such news targets their emotional responses. However, even though citizens are the weakest and most sensitive target of disinformation, empowering them with campaigns such as the recent Dutch campaign aiming to make citizens ‘media wise’ could considerably strengthen endeavours to limit the detrimental effects of disinformation. If the empowerment of citizens succeeds, they could perhaps be compared to the biblical David who beat Goliath. However, can the citizens (David) really beat the fake news phenomenon (Goliath) in the age of digital democracy?
