The European Union unveiled the world’s first plans to regulate artificial intelligence on Wednesday, doubling down on its role as a global rulemaker and challenging allies — namely the United States — to get on board.
The proposed rules, a top priority for Ursula von der Leyen as chief of the EU’s executive arm, aim to rein in “high-risk” uses of AI such as facial recognition or software to process job applications that, in the EU’s view, pose the greatest potential threat to society and individuals.
Europe’s proposal includes bans on practices that “manipulate persons through subliminal techniques beyond their consciousness” or exploit vulnerable groups such as children or people with disabilities. Also banned is government-conducted social scoring, the kind of system China has introduced to measure an individual’s trustworthiness.
Real-time biometric recognition systems, such as facial recognition, will be banned for law enforcement purposes unless they are necessary to find victims in cases such as kidnappings, to respond to terror attacks or to track down criminals.
With the AI rulebook, the EU is intensifying a years-long plan to position itself as the world’s primary rulemaker for technology following the rollout of its comprehensive privacy rules, the GDPR, in 2018. This time, tech giants from Silicon Valley to Shenzhen are expected to have less than two years to bring their business in line with the AI rules, which also present a challenge to the administration of U.S. President Joe Biden.
While Washington has sought closer ties with Europe to counter China’s growing tech ambitions, so far the U.S. hasn’t followed the EU’s lead on AI or on privacy. The new rules — which will now snake their way through Europe’s legislative process — may widen the regulatory gulf between the two sides, even as Brussels pushes for closer coordination on its own tech priorities via a proposed Trade and Technology Council.
At the same time, the proposed rules set the EU apart from China on tech. The fact that the rules have singled out social credit scoring — a tool used mainly in China — is a signal that Brussels wants to avoid uses of AI for authoritarian surveillance.
“It sends a clear message to China that the social credit system is incompatible with liberal democracies,” said Maroussia Lévesque, a researcher at the Berkman Klein Center at Harvard University.
“There is no room for mass surveillance in our society,” said Commission Executive Vice President Margrethe Vestager.
“For Europe to become a global leader in trustworthy AI, we need to give businesses access to the best conditions to build advanced AI systems,” Vestager said.
For such a sweeping topic as AI, the new rulebook has been developed remarkably fast, just three years after the Commission launched its first AI strategy.
But that speed may come at the price of greater opposition to the fine print from civil society actors and EU lawmakers who must now parse the European Commission’s proposal. Already, campaigners are voicing disappointment with a final Commission draft many of them say is too friendly to industry and gives governments too much latitude to use AI for surveillance.
New era of regulation
One of the U.S.’s main anxieties is the pace at which China is developing AI technologies. U.S. policymakers have urged their European counterparts to collaborate — hoping to avoid ceding more space to Chinese tech giants like Huawei, Tencent and ByteDance, which owns the popular video-sharing app TikTok.
A recent report from the National Security Commission on Artificial Intelligence, chaired by former Google CEO Eric Schmidt, placed a strong emphasis on boosting U.S. AI capabilities, especially in the defense sector, to maintain its competitive edge. The report also recommended strengthening collaboration with allies to speed up the process.
But the new rules could signal to the U.S. that Europe is more concerned with protecting its citizens than with keeping an eye on China, with which the bloc recently signed an investment agreement. Schmidt expressed skepticism of the European project last month when he told POLITICO the EU’s ambition to create a “third way” to regulate artificial intelligence won’t work.
Late last year, von der Leyen proposed a transatlantic “AI accord” with the U.S., and Europe is keen to signal its “third way” does not pit it against Washington.
The EU might well find itself an ally in the U.S. Federal Trade Commission, which will likely see Big Tech critic Lina Khan appointed as one of its commissioners.
The FTC also recently published guidance for companies that recognized the various deceptive practices that have become common with AI, such as selling products that don’t work, or systems that don’t do what they claim. Elisa Jillson, an attorney at the FTC, wrote that companies should hold themselves accountable or “be ready for the FTC to do it for you.”
These are signs that the U.S. is entering a new era of regulation, said Meredith Whittaker of the AI Now Institute at New York University.
“It remains to be seen how they use that power, but we are seeing at least in that agency a turn to a much more offensive stance for Big Tech,” said Whittaker.
Pass or fail
What gives the Europeans hope that the U.S. might play along is that they’re getting there first.
The bloc’s rules on data protection are now seen as the gold standard, prompting other countries to adopt similar rules. Some U.S. states, such as California, have followed suit, but the country remains a long way from a federal privacy law. The European Commission hopes U.S. companies eager to keep serving Europe’s market will comply.
Europe also has to convince tech companies its rules are worth following.
“If the European system is perceived to be slowing the uptake of AI, that may not be what other markets decide to do,” said Guido Lobrano, tech lobby ITI’s vice president of policy.
Moving first also doesn’t mean Europe’s proposal will stick.
Two EU officials who helped to draft Europe’s privacy standards expressed doubt that Brussels would be able to set the world’s de facto rules for artificial intelligence. They spoke to POLITICO on the condition of anonymity because they were not authorized to speak publicly about Brussels’ AI proposals.
The EU worked on data protection rules when nobody else was, one of the officials said. That’s not the case with AI. The U.S., China and other non-EU countries are eagerly pressing their claims for how the technology’s standards should be rolled out worldwide. That competition would make it hard, if not impossible, for the EU to run the table on AI rulemaking.
Not so different
For certain technologies, such as facial recognition, the EU and the U.S. might have very different narratives, but they converge in implementation, said Harvard’s Lévesque.
“The differences are not as big as we think between the American approach and the European approach,” Lévesque said. In the U.S., “many local governments are experimenting with bans or moratoriums … for government use of biometric surveillance and some of these measures are more stringent than the EU regulation,” Lévesque continued.
Alexandra Geese, a German Green MEP, said both the EU and the U.S. prioritize human rights and nondiscrimination. “These are the values we share. Most of the research we have about the discriminatory potential of AI that the European Commission is trying to regulate, comes from the U.S.,” Geese said.
Daniel Leufer, of digital rights group Access Now, said the narrative that the EU is the only one regulating risky AI technologies is not correct.
“The Portland ban on facial recognition is absolutely world leading … The EU following in the path of the Portland ban with its prohibitions on facial recognition will strengthen other local homegrown initiatives,” Leufer said.
“I’m sure Eric Schmidt won’t be happy about it, but you should be ruffling the feathers of the right people,” he added.
Mark Scott contributed reporting.