The United Kingdom announced Tuesday that it would fine tech companies like Facebook and Twitter up to 10 percent of their global revenue if they failed to stop illegal and harmful content from reaching their online users.
Under the proposals, social media companies, internet messaging apps and almost all other forms of digital services where people communicate with one another, including search engines, will fall within the scope of the new rules, which are expected to be tabled in the U.K. parliament next year.
Executives may also face criminal sanctions if their companies fail to uphold a so-called duty of care for their users — a mandate that will require firms to remove and limit the spread of illegal content like terrorist or child abuse material. The largest platforms will have to go even further to assess what “legal, but harmful” content like COVID-19 disinformation is allowed on their sites. Failure to act may lead to fines of up to £18 million or 10 percent of a firm’s global revenue, whichever is higher.
“Today Britain is setting the global standard for safety online with the most comprehensive approach yet to online regulation,” said British Digital Secretary Oliver Dowden. “We are entering a new age of accountability for tech to protect children and vulnerable users, to restore trust in this industry, and to enshrine in law safeguards for free speech.”
Britain’s lead may be short-lived.
Who defines harm
Later on Tuesday, the European Union will announce its own content moderation overhaul, known as the Digital Services Act, which will impose fines of up to six percent of global revenue on companies that do not meet new obligations, including removing illegal content like hate speech. Both the U.K. and the EU are striving to out-toughen the other in the face of the world’s tech giants, with Britain keen to flex its regulatory muscles ahead of its official departure from the 27-country bloc on December 31.
First proposed in 2019, the British legislation was originally expected to encompass issues like cyberbullying and content promoting self-harm. The suicide of 14-year-old Molly Russell in 2017 had prompted then-Digital Secretary Jeremy Wright and Health Secretary Matt Hancock to pledge action against such online harms.
But three years later, the new rules have strayed from directly addressing such content. Instead, the government is “progressing work with the Law Commission on whether the promotion of self-harm should be made illegal,” the proposal said.
The U.K. has similarly ducked responsibility for defining what constitutes harmful content, an ambiguous term that encompasses online material that, while legal, may still be problematic. Tech companies had urged lawmakers to determine what should be included in that definition, but in its proposals Tuesday, London said it would be up to the largest platforms to define it for themselves.
The new rules will apply to a broad range of digital services, though news websites — which often have comment sections that allow readers to discuss hot-button topics — will be exempt. That follows tough lobbying by publishing groups, which feared they could face hefty penalties or onerous regulatory costs associated with policing their online users. Some forms of online advertising, however, including paid-for ads that appear on social media, will be covered by the content rules.
The Office of Communications, the U.K.’s national regulator, will oversee the new regime and will have powers both to levy fines and to block digital services that fail to comply with London’s push to remove the most harmful material from the internet.
“We are giving internet users the protection they deserve and are working with companies to tackle some of the abuses happening on the web,” Priti Patel, the country’s home secretary, said in a statement.
In a sign of the difficulties that may await the U.K. government, both online campaigners and industry groups voiced their concerns about the new proposals.
Harriet Kingaby, co-chair of the Conscious Advertising Network, a trade group whose aim is to stop advertising from being associated with hate speech, fraud and other online harms, said the industry and tech platforms should not be left to regulate themselves when it comes to defining harmful content.
“For meaningful change, we also need governments to introduce systemic reforms which protect the individual consumer, advertisers, and society as a whole,” she said in a statement.
British tech groups also highlighted the potential unintended consequences of forcing digital firms — both big and small — to police what is shared on their services. Dom Hallas, executive director of the Coalition for a Digital Economy, a trade group representing mostly smaller British startups, said it was unclear how the U.K. government’s new rules would make the internet safer, adding that greater regulatory scrutiny could tilt the scales further in favor of larger firms.
“Until the government starts to work collaboratively instead of consistently threatening startup founders with jail time it’s not clear how we’re going to deliver proposals that work,” he added.