YouTube has announced new procedures to tackle problematic content and improve brand safety. The platform will introduce more human oversight and moderation of videos and, Bloomberg reports, a new method of vetting premium content for advertisers.
More human and artificial intelligence monitoring will be applied to the existing ‘Google Preferred’ category — a tier of premium channels offered to advertisers at a higher price — to improve the detection of content that is inappropriate for advertising.
The improvements aim to give advertisers “assurances for what they buy” ahead of time, according to a company spokesperson in an email sent to partners.
In a separate company blog post, YouTube CEO Susan Wojcicki outlined a renewed approach to brand safety.
“We want advertisers to have peace of mind that their ads are running alongside content that reflects their brand’s values,” she wrote in December.
“We believe this requires a new approach to advertising on YouTube, carefully considering which channels and videos are eligible for advertising. We are planning to apply stricter criteria, conduct more manual curation, while also significantly ramping up our team of ad reviewers to ensure ads are only running where they should.”
YouTube plans to increase the number of people working to detect problematic content to more than 10,000 in 2018.
The right thing to do
According to Wojcicki, the measures are being undertaken “because it’s the right thing to do”. They may also have something to do with what has been a challenging year for YouTube and brand safety, and recent controversy surrounding one of the platform’s biggest stars.
In March last year, YouTube came under fire from brands after their advertisements were placed alongside extremist content, meaning they were helping to fund the content creators and unwittingly lending them credence. The revelation triggered a boycott by several brands and may have cost Google $750m. Google issued a public apology.
The latest incident involved prominent and polarising YouTube vlogger Logan Paul, who posted a video that included footage of an apparent suicide victim in Japan.
The video was reportedly okayed by YouTube’s content assessment team before being removed — but not before it had been seen by six million people and appeared on YouTube’s trending list.
Paul’s videos had been part of the Google Preferred program for advertisers, and he was part of YouTube’s Red subscription service. But following public outcry, YouTube removed Paul’s videos from Google Preferred and placed his subscription projects “on hold”.
YouTube posted a public apology on Twitter and conceded it had “taken us a long time to respond”.