Over 80% of the 3.3 billion pieces of content removed by the key technology platforms covered in the Global Alliance for Responsible Media (GARM) report fall into three categories: spam, adult & explicit content, and hate speech & acts of aggression. The finding comes from GARM’s first report tracking brand-safety performance across seven platforms, including Facebook, Instagram, Twitter and YouTube, the next step in its mission to improve the safety, trustworthiness and sustainability of media.
GARM is a cross-industry initiative founded and led by the World Federation of Advertisers (WFA) and supported by other trade bodies, including the Association of National Advertisers (ANA), the Incorporated Society of British Advertisers (ISBA) and the American Association of Advertising Agencies (4A’s). According to a statement, by aggregating existing platform transparency reports and adding policy-level granularity, the new document creates a common framework that enables advertisers to assess each GARM platform member’s progress on brand safety.
The GARM Aggregated Measurement Report is built around four key questions marketers can use to assess progress over time. The report is consistent with the common framework used to define harmful content not suitable for advertising and introduces aggregated reporting. “There’s no place for harmful online content in media that’s monetised by advertising, and we need to understand the size of the problem and track progress over time,” said Marc Pritchard, P&G chief brand officer. “The GARM Aggregated Measurement report is an important step forward in helping brands advertise in safe and suitable places—a critical element for consumer trust.”
The report follows nine months of collaborative workshops between advertisers, agencies and key global platforms, working together as one of GARM’s Working Groups. It brings together, for the first time, data in a single, agreed location around four core questions and eight authorised metrics deemed critical to tracking progress on brand safety.
This report includes self-reported data from Facebook, Instagram, Pinterest, Snap, TikTok, Twitter and YouTube. Twitch, which joined GARM in March, will join the reporting process for the next report, due later this year.
GARM platforms have reported increases in enforcement activity and its impact, with significant progress by YouTube in the number of account removals, Facebook in reducing the prevalence of violations, and Twitter in the number of pieces of content removed. These initial improvements have occurred amid an increased reliance on automated content moderation to manage blocking and reinstatements, as COVID-19 disruptions left moderation teams working at limited capacity.
“This report establishes common and collective benchmarks that reinforce our goals and help brand leaders, organizations and agencies make sure we keep media environments safe and secure,” said Raja Rajamannar, chief marketing and communications officer at Mastercard and WFA president.