Last month (21 April), the Foreign Affairs Committee of the Dutch Parliament had an online call with Leonid Volkov, Alexei Navalny’s chief of staff.
Or so the parliamentarians thought. It turned out that they may have been talking to a deepfake version of Volkov.
This could be a momentous event: the first time deepfake technology has been abused to interfere with high-level politics in the EU.
There appears to be a growing number of political cheapfakes – videos that are manipulated in simple, less technically sophisticated ways. For example, videos may be shared out of context or simply slowed down. Although experts can easily debunk them, cheapfakes may seem real to the average viewer.
The best-known example is the 2019 video of Nancy Pelosi appearing to be drunk. The video was slowed down, so her speech seemed slurred.
Deepfakes pose a different level of threat. They may be so skilfully produced that even experts cannot say with certainty if they are real or not.
Despite many fears, deepfakes did not cause any issues in the 2020 US presidential election. But this is no reason for comfort, given the technology’s growing sophistication and the falling barriers to producing deepfakes.
At Democracy Reporting International, a Berlin-based NGO, we talked to many industry experts. Some think it is a matter of months before the deepfake threat materialises, while others believe it will still take a few years.
All agree that face-swapping technology is advancing rapidly, most notably in film, gaming and apps. For example, it may let actors speak convincingly in other languages or let gamers take on different, possibly famous, personalities.
As we should know by now, any new technology built with benign intentions may be abused for nefarious purposes.
Once deepfakes enter the market of political disinformation, the problems we face now, mostly text-based false news, may look like child’s play. “It’s the calm before the storm” is how one industry insider described it.
Videos speak to the human brain in a much more immediate manner than text – the ‘I have seen it with my own eyes’ phenomenon.
Worse, the mere possibility of manipulating video may cause anything to be questioned. A genuine video can be dismissed as a deepfake, or a manipulated one trumpeted as genuine.
Over time the public attitude may become “disbelief by default”, as Sam Gregory of the NGO WITNESS noted.
What should be done? The most promising technical solution is to mark content at the moment of production with something like a watermark. If the content is manipulated afterwards, the watermark would be affected, and any user could easily see that the video is not authentic.
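To make the idea concrete, here is a minimal sketch in Python of how signing at the moment of capture could work. The function names and workflow are illustrative assumptions, not any actual provenance standard; it uses the open-source cryptography package to sign a digest of the video when it is produced, so that any later change to the bytes breaks verification.

```python
# A minimal sketch of provenance-style content signing. This illustrates
# the general "mark at production, detect manipulation later" idea, not
# the Content Authenticity Initiative's actual specification.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_at_capture(video_bytes: bytes, device_key: Ed25519PrivateKey) -> bytes:
    """Sign a digest of the video at the moment of production."""
    digest = hashlib.sha256(video_bytes).digest()
    return device_key.sign(digest)


def is_authentic(video_bytes: bytes, signature: bytes,
                 device_pubkey: Ed25519PublicKey) -> bool:
    """Verify the signature; any manipulation of the bytes breaks it."""
    digest = hashlib.sha256(video_bytes).digest()
    try:
        device_pubkey.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# Hypothetical usage: a camera signs at capture; a player verifies on display.
device_key = Ed25519PrivateKey.generate()
original = b"...raw video bytes..."
signature = sign_at_capture(original, device_key)

print(is_authentic(original, signature, device_key.public_key()))             # True
print(is_authentic(original + b"tampered", signature, device_key.public_key()))  # False
```

Signing a digest rather than the raw bytes keeps the signature small; real provenance schemes go further and embed signed capture metadata in the file itself, so that legitimate edits can be recorded rather than merely detected.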
‘Content Authenticity Initiative’ may not be enough
Some major firms, as well as civil society and media organisations, have teamed up under the Content Authenticity Initiative to create a standard for digital content provenance.
This is a good start, but many more firms should get involved, and civil society groups should apply to join, making sure global perspectives are reflected. We need to be careful, for example, that provenance tools cannot be abused to expose brave citizen journalists who document human rights abuses.
We can do more in the EU as well. On the positive side, the European Commission addressed the threat in its proposal for an AI Act, unveiled in April.
The draft obliges users of AI systems that generate or manipulate content to disclose when image, video or audio material has been manipulated through automated means.
This is a start, but most disinformation actors are unlikely to fall under the definitions of the act.
Ideally the problem would be addressed upstream, before AI systems start manipulating videos. The above-mentioned watermarking at the moment of production would offer that solution, especially if flanked by other measures like digital education.
In view of the geopolitical dimensions of tech regulation, the EU and the US should explore if there is room for a global initiative that would include a key tech producer like China. The point would be to agree on a system of watermarking of any video that is produced, so that subsequent manipulation can be detected.
Deepfakes can harm all societies and all governments equally, so hopefully all have an interest in preventing abuse. It is a problem that needs a global solution. If some companies do not play along and let unmarked deepfakes flourish, public confidence in online visual content will suffer, even if other companies work hard to prevent it.
The story in the Dutch parliament remains murky. Maybe Volkov was not imitated through a deepfake, but simply impersonated by a look-alike prankster.
But the mere possibility that it was a deepfake has already created much doubt and confusion. It was not a consequential story, but it may have been like the little breeze that announces a coming storm. It’s time to act.