Social media giant Twitter tops US tech firms applying an EU code to tackle disinformation – even if it does so only partially.
“Nobody has really fully respected the code,” Thierry Breton, the EU’s internal market commissioner, told reporters on Wednesday (26 May).
“I would tell you if somebody had. But from Google, Facebook, Twitter, Microsoft and TikTok, one did better than the others.”
EUobserver was informed it was Twitter. All five had signed up to the voluntary 2018 code.
But given the overall poor results, the code is now being revamped.
The reform comes at a time when Covid-related conspiracies are pushing some against getting vaccinated, while others push bogus cures.
Speaking alongside Breton, EU commission vice-president Vera Jourova said the disinformation has put people’s lives at risk.
Twitter in March had suspended 149 unique accounts and removed 5,371 pieces of content that violated its Covid-19 misinformation policy.
Facebook says it pulled 620,000 pieces of content on Facebook and Instagram globally, and another 52,000 in Europe, over the same period. Google removed 30,000 YouTube videos in the last quarter of 2020.
On Wednesday, Jourova announced the commission’s latest proposal for a beefed-up code of practice on disinformation.
The plan is to reach an agreement before the end of the year and then have it embedded into the Digital Services Act (DSA).
The DSA is set to come into force in 2022, allowing for greater oversight and possible sanctions for firms that do not follow the code.
“This is fundamental,” said Luca Nicotra, a campaign director at Avaaz, a US activist NGO.
“I think they are going all in on this and it’s really exciting,” he said – cautioning that the commission’s proposal could still be watered down later on.
Aside from being anchored in the DSA, the code presented by the commission also introduces other novelties.
Among them is a completely new focus on algorithmic accountability and transparency.
The aim is not to reveal the algorithmic source code.
Platforms will instead have to prove they are making changes to prevent the spread of disinformation.
“We would like them to embed the fact-checking into their system,” said Jourova.
It means platforms will not have to decide what is disinformation or misinformation, she said.
“We are deeply convinced that there should be no one authorised to be the arbitrator of the truth,” she added.
The new code would require platforms to tackle disinformation across different languages.
It also introduces other measures, requiring platforms to provide more data for researchers, tackle political adverts and curb election manipulation.
“Disinformation is still something that sells well, so we want to engage also the advertising industry not to place the ads next to disinformation,” said Jourova.
Users will also be encouraged to flag harmful content. And anyone whose content is removed can appeal.
The commission wants to monitor compliance, along with the EU’s foreign-policy branch, the EEAS.
The European Regulators Group for Audiovisual Media Services (ERGA) and the European Digital Media Observatory (EDMO) would also help monitor.
The commission then plans to adopt rules on political advertising in November, with the aim of stopping foreign interference.
“We are working very closely with Josep Borrell [EU foreign policy chief] on this subject,” noted Breton.