Donald Trump’s banishment from social media has rekindled the political fight over who should police what people post on the internet.
For those wondering what comes next, look no further than Europe, where policymakers have been pushing for such content rules for years — and everyone from the European Union to France and Germany has tabled new laws aimed at keeping people safe online.
“Europe is the first continent in the world to initiate a comprehensive reform of our digital space,” Thierry Breton, the European Commissioner in charge of new proposals known as the Digital Services Act, wrote in POLITICO. “What is illegal offline should also be illegal online.”
The Digital Services Act, approved by the EU’s executive branch in December, foresees hefty fines if the likes of Facebook and Amazon don’t police illegal content and products on their global platforms.
Yet amid this self-congratulation, Europe’s track record in drafting online content rules has been haphazard at best. Its experience offers tough lessons for those outside the 27-country bloc, including in the United States where people are now clamoring for greater action by social media giants and lawmakers to take down the worst-offending material.
In places like Germany, strict rules around illegal content such as Nazi propaganda — and multi-million euro penalties if social media giants don’t follow them — have done nothing to curb harmful content that, while heinous, is not technically illegal. In France, comprehensive proposals that would have forced platforms to remove hateful and terrorist content within hours were struck down by local judges on freedom of speech grounds.
And even the EU’s latest attempt at legislation — which includes mandatory risk assessments on how online content may become weaponized, and potentially billions of euros in fines if companies flout the rules — may not have stopped the years of disinformation that went hand-in-hand with Trump’s outgoing administration.
The issue underlying all these efforts is a basic truth: determining what should be allowed on the internet is a difficult, if not impossible, challenge.
Any attempt to police online content must walk a tightrope between people’s right to say whatever they like, and others’ equal right to be protected while online. Any set of rules comes with inevitable drawbacks that will almost certainly inflame tensions, lead to unintended consequences and struggle to quell the wave of online falsehoods that have engulfed large parts of the web.
“Saying what is illegal offline should be illegal online is the easy part,” said Richard Allan, a member of the United Kingdom’s House of Lords and a former senior public policy official at Facebook. “But a lot of speech that’s unpleasant and offensive is not illegal. How do you deal with that?”
Broken system
Those difficulties have played out in real time as Europe has taken on the mantle of protecting its citizens from online harms.
Much of that attention has centered purely on illegal content — or material like terrorist propaganda that breaks existing rules. In Germany, the so-called NetzDG rules, which came into full force in early 2018, demand that social media giants remove potentially illegal content, some of it within 24 hours of being notified, or face fines of up to €50 million.
In some ways, the law has worked. Facebook, Google and Twitter now routinely report on how much of such content they have taken down. But Berlin is now revamping the law again, forcing social media firms to proactively report potential hate speech, which is illegal in Germany, to law enforcement, amid concerns that people are still not safe online. The rules also leave untouched much of the disinformation that remains rife online, because almost none of that material breaks Germany’s existing hate speech laws.
In the United States, even rules on hate speech would prove highly controversial given the sweeping nature of the Constitution’s First Amendment enshrining free speech.
“NetzDG doesn’t address the real issue,” said Julian Jaursch, a disinformation expert at Stiftung Neue Verantwortung, a think tank in Berlin. “It’s the grey area of illegal versus legal content that’s the problem.”
It’s not that European policymakers don’t recognize that problem.
Brussels’ latest online content proposals almost exclusively focus on outright illegal content and counterfeit goods, a recognition that determining what type of harmful, but legal content to include is a legal quagmire. But three people involved in drafting the proposals stressed the plan left some wiggle room to police content that didn’t specifically break existing rules.
That included provisions — alongside mandatory auditing of how companies handle online material and powers for regulators to make dawn raids on their operations, if required — that outlawed “intentional manipulation” of social media platforms in ways that affected “electoral processes and public security.”
Those proposals may still be scrapped amid intense lobbying already underway in Brussels. Yet proponents argue they go beyond any other attempt to hobble the worst content online, and could provide a playbook for others to follow.
“This is the most aggressive form of platform accountability we’ve ever seen,” said Ben Scott, a former senior tech adviser to Hillary Clinton who now runs a lobbying outfit promoting greater tech accountability.
Too much power?
While Europe has spent years drafting these proposals, it’s still unclear if they will have any meaningful effect on online disinformation — let alone if others outside the bloc will follow suit.
Even on Monday, several EU leaders, including German Chancellor Angela Merkel, raised concerns that Twitter, as a private company, should not have the power to determine if the U.S. president should be allowed on its platform.
EU policymakers have also proposed a voluntary code designed to coax social media companies into taking greater responsibility for online disinformation, including coordination between regulators and firms to police online material.
The new rules, though, are not expected until 2023 at the earliest, and online content experts are skeptical that they will be able to stem the tidal wave of harmful content, particularly material linked to the ongoing COVID-19 pandemic.
“Until we can audit how companies make decisions, it’s pretty meaningless,” said Claire Wardle, co-founder of FirstDraftNews, a non-profit organization that helps media outlets tackle disinformation, in reference to current policymaking responses.
A big question also remains about the U.S.’s willingness to take its cues from Europe.
Tom Wheeler, the former chairman of the U.S. Federal Communications Commission, said many in Washington remained skeptical of the EU’s efforts, in part because of the country’s First Amendment tradition and a reluctance to copy rules that come from outside of its borders.
Amid renewed discussions about how the U.S. should hold social media companies to account following Joe Biden’s presidential victory, Wheeler added that Washington was still years behind what the EU had proposed through its Digital Services Act, because an ongoing feud between Republicans and Democrats has stymied progress on how best to police content online.
“There isn’t a monopoly on good ideas,” Wheeler said. “We need to figure out when we should go to talk to people outside the U.S. who haven’t shied away from tough decision making.”