When China-linked networks of social media bots and trolls appeared on the global disinformation scene in 2019, most analysts concluded that their impact and reach were fairly limited, particularly in terms of engagement by real users and relative to more sophisticated actors in this realm, like the Russian regime. As many China watchers anticipated, that assessment now seems to be changing.
Several in-depth investigations published over the past two months by academic researchers, think tanks, news outlets, and cybersecurity companies have shed light on the evolution of disinformation campaigns originating in China. Some offer new insights into campaigns that peaked last spring, while others analyze more recent messaging, tactics, and accounts that have emerged since October 2020.
A close reading of these investigations points to several emergent features of China-linked disinformation campaigns – meaning the purposeful dissemination of misleading content, including via inauthentic activity on global social media platforms.
Collectively, the studies indicate that significant human and financial resources are being devoted to the disinformation effort; the overall sophistication and impact have increased; and linkages between official accounts and fake accounts are more evident, rendering plausible deniability by the Chinese government more difficult.
Disinformation is only one tool – and perhaps not the most important one – in Beijing’s sizable collection of instruments for global media influence, but Chinese authorities and their proxies are clearly working to increase its potency, and the process warrants close observation.
Persistence and Adaptation
One key takeaway from the recent reports is the persistence of the networks of inauthentic accounts being used to channel official messaging to global audiences. A February 21 report by the cybersecurity firm Graphika was its fourth to focus on a network of accounts it has termed Spamouflage. Despite repeated takedowns by Facebook, Twitter, and YouTube following reports from Graphika or independent detection by the platforms themselves (sometimes just hours after the questionable accounts posted content), the networks and specific fake personas have continued to revive themselves.
Another study by the Crime and Security Research Institute at Cardiff University remarked on the structural complexity of a network of fake Twitter accounts linked to China that the authors had detected. It consisted of a series of almost autonomous “cells” with minimal links between them, which appeared to enhance the network’s resilience. The report notes that the behavior patterns are unusual and appear to have been designed to avoid detection by Twitter’s algorithms.
Researchers found additional signs of adaptation. Notably, Graphika detected experimentation with persona accounts, which look and behave like real people, even as the Spamouflage network continued to deploy hundreds of more obviously fake accounts.
In some instances, the persona accounts were new creations, while in others they appeared to be real accounts that had been stolen or purchased from their previous owners. Even after certain persona accounts were taken down, they were revived and evolved over time in an effort to improve engagement with local audiences in different parts of the world. One such Twitter account identified by Graphika was posting primarily in Spanish by its third incarnation. This modus operandi involving persona accounts had previously been associated more with the Kremlin’s disinformation playbook than with Beijing’s.
Persistence is also evident in the longevity of certain messaging. One long-standing disinformation campaign, pushed by both official Chinese government outlets and inauthentic accounts, has promoted the conspiracy theory that COVID-19 is a bioweapon developed in the United States and brought to China by the U.S. military in October 2019. PBS’s Frontline documented 24 digital stories posted by the Chinese Communist Party-aligned Global Times that mentioned the unfounded theory, the earliest appearing in March 2020 and the most recent in early February this year. This was just one example of anti-U.S. narratives disseminated by state media and inauthentic accounts that continued beyond the Trump administration and into the Biden presidency.
Increased Efficacy in Reaching Real Users
The tactical shifts to date appear to have paid off. Graphika found that the persona accounts were especially effective at facilitating “genuine engagement” and emerged as a “main driver of impact” in the Spamouflage network, which as a whole appeared to broaden its reach and achieve greater success in prompting shares from real social media influencers in multiple countries. The Spanish-speaking account noted above generated posts that were shared by top Venezuelan government accounts, including that of the country’s foreign minister, as well as others with large followings in Latin America.
Further examples noted in the report include posts that were shared by politicians or technology executives in Pakistan, the United Kingdom, and elsewhere, including some with millions of followers. Two reports found that Russian and Iranian social media assets helped promote the COVID-19 conspiracy theory and related China-linked posts, including in regions like the Middle East, where China’s media footprint is less robust. With regard to the United States, the Cardiff University study cited the example of a subsequently debunked video of someone allegedly burning ballots in Virginia, which was ultimately shared in early November by Eric Trump and garnered over a million views. The user whose post Trump retweeted had apparently encountered the clip via two accounts in a China-linked disinformation network.
The reports cite several instances of journalists or local traditional media outlets in different countries – not just individual influencers on social media – unknowingly sharing disinformation on their own accounts, news websites, or television broadcasts. This enhances the credibility of the content and delivers it to a much wider audience. Reflecting the global nature of the phenomenon, the examples found included a Panamanian news channel with over 800,000 Twitter followers, a Greek defense publication, an Indian news website, an Argentinian journalist and former CNN anchor with 500,000 Twitter followers, newspapers in Finland and New Zealand, and a television station in Texas.
Coordination and Official Linkages
Conclusively attributing inauthentic networks of social media accounts to Chinese party-state actors is exceedingly difficult, even when they are clearly promoting Beijing’s preferred narratives or specific state-produced content. Nevertheless, signs of coordination and patterns of behavior indicating official backing are emerging with increasing frequency. Graphika found that fake accounts in the network it investigated had been amplified by Chinese diplomats on Twitter hundreds of times. The researchers acknowledge that the Chinese diplomats may have shared content from the network without knowing that these were fake accounts, assuming instead that they were genuine “patriotic” netizens. However, the timing and content of the material shared by the fake accounts tracked very closely with the activity of Chinese diplomats or state media outlets, even on globally obscure topics like President Xi Jinping’s visit to Shanghai in November to celebrate the anniversary of China’s “reform and opening up.”
Several features of the campaigns led separate researchers to conclude that they very likely enjoyed some Chinese state backing. First, the sheer volume, speed, and sophistication of the activity are noteworthy. Graphika found that between February 2020 and January 2021, the fake Twitter accounts of the network under study had posted over 1,400 unique videos. Many of these reacted to breaking news events within 36 hours of their occurrence, suggesting significant resources and a degree of professionalism that would be difficult for ordinary users to achieve. Second, the Cardiff University research team mapped the timing of the posts from the Twitter network they analyzed and found that its activity closely matched working hours in China, even dipping during a fall holiday that is not widely celebrated elsewhere. Lastly, ordinary citizens who circumvent censorship to access and engage on Twitter have increasingly faced legal reprisals in China over the past two years, suggesting that those behind these networks likely had some tacit official approval if not active support.
Harmful Content
The content promoted by the fake accounts ranges along a spectrum from relatively benign to highly problematic. Some posts sought to amplify praise for China by highlighting a parade in Hubei province for COVID-19 medics, while others highlighted failures or accidents affecting the United States, including lightning strikes or downed drones in Syria. Another group of posts aimed to attack and discredit perceived enemies of Beijing, like the pro-democracy movement in Hong Kong, exiled billionaire Guo Wengui, or more recently the BBC.
Perhaps the most troubling content is that which could have serious public health and political implications, especially if it builds on other material already circulating in a target country’s information ecosystem. Indeed, analysts found that posts often sought to exploit pre-existing narratives and content from domestic social media or fringe websites in order to enhance engagement and local resonance. Examples included false information about Taiwan’s response to the pandemic, videos questioning the safety of the Pfizer-BioNTech vaccine, and the conspiracy theory that COVID-19 had been developed at a U.S. military facility. Others focused on the U.S. elections, and while they did not appear to promote a particular presidential candidate (instead criticizing both), they did amplify a questionable claim of election fraud, calls for violence, and other content stoking social discord in the United States both before and after the January 6 riot at the Capitol.
At present the networks are still reaching a relatively small audience and represent only one part of a much larger toolbox deployed by Beijing or its proxies to influence global information flows. Nevertheless, the underlying conclusions of these studies are worrisome. The number of people affected is growing, the accounts are breaking out of their own echo chambers to reach millions of global social media users on vital issues of public health and political participation, and the campaigns are clearly part of an organized, well-funded, and persistent effort almost certainly driven by some part of the Chinese party-state apparatus.
Whoever in the party-state apparatus is driving this effort is becoming more adept, and if there is one thing that the Chinese Communist Party has proven itself capable of time and again, it is innovation in the service of its own political survival.