A vast, globalized industry of low-cost social media manipulation service providers continues to flourish, distorting both commerce and politics — including the verified social media accounts of two U.S. senators
BRUSSELS — The conversation taking place on the verified social media accounts of two U.S. senators remained vulnerable to manipulation, even amid heightened scrutiny in the run-up to the U.S. presidential election, an investigation by the NATO Strategic Communications Centre of Excellence found.
Researchers from the center, a NATO-accredited research group based in Riga, Latvia, paid three Russian companies 300 euros ($368) to buy 337,768 fake likes, views and shares of posts on Facebook, Instagram, Twitter, YouTube and TikTok, including content from verified accounts of Sens. Chuck Grassley and Chris Murphy.
Grassley’s office confirmed that the Republican from Iowa participated in the experiment. Murphy, a Connecticut Democrat, said in a statement that he agreed to participate because it’s important to understand how vulnerable even verified accounts are.
“We’ve seen how easy it is for foreign adversaries to use social media as a tool to manipulate election campaigns and stoke political unrest,” Murphy said. “It’s clear that social media companies are not doing enough to combat misinformation and paid manipulation on their own platforms and more needs to be done to prevent abuse.”
In an age when much public debate has moved online, widespread social media manipulation not only distorts commercial markets, it is also a threat to national security, NATO StratCom director Janis Sarts told The Associated Press.
“These kinds of inauthentic accounts are being hired to trick the algorithm into thinking this is very popular information and thus make divisive things seem more popular and get them to more people. That in turn deepens divisions and thus weakens us as a society,” he explained.
More than 98% of the fake engagements remained active after four weeks, researchers found, and 97% of the accounts they reported for inauthentic activity were still active five days later.
NATO StratCom ran a similar exercise in 2019 using the accounts of European officials. Compared with that test, researchers found Twitter is now taking down inauthentic content faster, and Facebook has made it harder to create fake accounts, pushing manipulators to use real people instead of bots, which is costlier and less scalable.
“We’ve spent years strengthening our detection systems against fake engagement with a focus on stopping the accounts that have the potential to cause the most harm,” a Facebook company spokesperson said in an email.
But YouTube and Facebook-owned Instagram remain vulnerable, researchers said, and TikTok appeared “defenseless.”
“The level of resources they spend matters a lot to how vulnerable they are,” said Sebastian Bay, the lead author of the report. “It means you are unequally protected across social media platforms. It makes the case for regulation stronger. It’s as if you had cars with and without seatbelts.”
Researchers said that for the purposes of this experiment they promoted apolitical content, including pictures of dogs and food, to avoid actual impact during the U.S. election season.
Ben Scott, executive director of Reset.tech, a London-based initiative that works to combat digital threats to democracy, said the investigation showed how easy it is to manipulate political communication and how little platforms have done to fix long-standing problems.
“What’s most galling is the simplicity of manipulation,” he said. “Basic democratic principles of how societies make decisions get corrupted if you have organized manipulation that is this widespread and this easy to do.”
Twitter said it proactively tackles platform manipulation and works to mitigate it at scale.
“This is an evolving challenge and this study reflects the immense effort that Twitter has made to improve the health of the public conversation,” Yoel Roth, Twitter’s head of site integrity, said in an email.
YouTube said it has put in place safeguards to root out inauthentic activity on its site, and noted that more than 2 million videos were removed from the site in the third quarter of 2020 for violating its spam policies.
“We’ll continue to deal with attempts to abuse our systems and share relevant information with industry partners,” the company said in a statement.
TikTok said it has zero tolerance toward inauthentic behavior on its platform and that it removes content or accounts that promote spam or fake engagement, impersonation or misleading information that may cause harm.
“We’re also investing in third-party testing, automated technology, and comprehensive policies to get ahead of the ever-evolving tactics of people and organizations who aim to mislead others,” a company spokesperson said in an email.
———
Associated Press writer David Klepper in Providence, Rhode Island, contributed to this report.