Peter Cunliffe-Jones is a visiting researcher at the University of Westminster and the Founder of Africa Check.
When it comes to tech regulation, what happens in Europe doesn’t usually stay in Europe. Legislation cooked up in Brussels has a way of becoming a de facto standard for governments around the world looking for off-the-shelf solutions to the challenges of the digital age.
With the European Union’s landmark proposal on fighting misinformation — the Digital Services Act (DSA) and its accompanying Code of Practice on disinformation — that’s bad news. The approach embraced by Brussels simply doesn’t work, in Europe or anywhere else. Not only does it fail to address the harm from misinformation, our research suggests it risks doing real damage of its own.
The poster child of the EU’s tech heft is its data privacy law, the General Data Protection Regulation. “Since the adoption of the GDPR, we have seen the beginning of a race to the top for the adoption or upgrade of data protection laws around the world,” according to Estelle Massé, a policy analyst with digital rights campaigners Access Now.
The Protection of Personal Information Act (POPIA), which came into effect in South Africa in 2020, is often compared to the GDPR and is expected to be brought more closely into line with it over time. Kenya's data privacy law is also "largely modelled on" the EU regulation (though not closely enough for its critics), analysts say.
That's great in the case of the GDPR, where the model is broadly a good one. It would be terrible if the same thing were to happen with the DSA and the Code of Practice.
Our research into the way misinformation causes harm suggests the two will fail to achieve one of their main goals: reducing the negative effects of misinformation.
Complex legislation though it is, the DSA’s approach is a simple one. It boils down to ordering tech companies to promptly remove “illegal content” once it has been identified or signaled to them, or face major fines if they don’t.
The accompanying Code, first introduced in 2018 and now being updated, applies the same principle to misinformation: requiring companies to either remove or demote content deemed false and the accounts that promote it.
The problems with the DSA, the Code of Practice, and similar models for tackling misinformation and disinformation are threefold:
First is the responsibility or license the laws give to privately-owned tech companies to decide, behind closed doors, what constitutes harmful content. Few object to takedowns of child pornography, terrorism-related material or hate speech. But all of those are quite well-defined. What counts as misinformation or disinformation is not, and working out what is harmful is harder still.
Second, even if the tech firms do identify harmful misinformation in a way the public would agree with, simply forcing them to take it down after the fact does not reverse the harm already caused, as more proactive approaches, such as teaching misinformation literacy and fact-checking, can.
Third, while the DSA and the Code are presented as a solution to harmful misinformation, they offer no answers to the wider problems of information disorder (a technical term for the witting or unwitting sharing of falsehoods). As Christine Czerniak, technical lead of the World Health Organisation's team fighting Covid-19 information disorder, said, the "infodemic" is "a lot more than misinformation or disinformation." She added that she was speaking in general terms, not specifically in relation to the DSA or other proposals.
Any approach to disinformation must address why people are sharing it in the first place.
In addition to not working, the takedown approach can itself do actual harm. Czerniak noted that it has the potential to escalate polarization or to make it harder for public health teams to hear people's concerns and respond to them.
Bad laws intended to halt disinformation can be used to limit public debate. When we examined the laws of 11 African countries, we found that the number of laws targeting "false" information doubled between 2016 and 2020, spurred by, and borrowing the language of, the crackdowns in Europe and elsewhere.
The laws provided vague or no definitions of what counts as “false,” or how “harm” should be proven, but put in place heavy fines or jail time for those who transgress. Unsurprisingly, most of those punished under these new laws have been political opponents and journalists.
If the EU wants to get serious about tackling harmful misinformation, at home or abroad, our research suggests it would need to take a different approach.
First, the EU and national governments should agree to a transparent approach to content moderation, with common definitions and standards of evidence, and a preference for correcting misinformation rather than simply censoring it.
Second, European education systems need to rethink their approach to teaching media literacy. The sort of broad media literacy taught today across much of Europe is not as effective at reducing susceptibility to false information as misinformation literacy — media literacy focused on specific misinformation knowledge and skills.
Third, national governments need to take measures to counter false claims made by domestic politicians in their official capacities, one of the most dangerous forms of disinformation. The EU cannot mandate practices in national parliaments, but the European Parliament could show a lead, requiring MEPs and officials to correct misleading statements they make in parliament. This is not so outlandish: it is already required of ministers in the UK, for example.
Finally, the best way to counter misinformation is with information. When it comes to topics that are particularly vulnerable to misinformation, it is crucial that official sources provide citizens with a place they can find the real facts.
If the EU were to put in place a measured, effective approach to fighting misinformation, it would have the potential to do a lot of good for the world.