YouTube to expand teams reviewing extremist content
Alphabet's YouTube said on Monday it plans to add more people next year to identify inappropriate content, as the company responds to criticism over extremist, violent and disturbing videos and comments.
YouTube has developed automated software to identify videos linked to extremism and now is aiming to do the same with clips that portray hate speech or are unsuitable for children. Uploaders whose videos are flagged by the software may be ineligible for generating ad revenue.
But amid stepped-up enforcement, the company has received complaints from video uploaders that the software is error-prone.
Adding to the thousands of existing content reviewers will give YouTube more data with which to train and possibly improve its machine learning software.
The goal is to bring the total number of people across Google working to address content that might violate its policies to over 10 000 in 2018, YouTube CEO Susan Wojcicki said in one of a pair of blog posts on Monday.
"We need an approach that does a better job determining which channels and videos should be eligible for advertising," she said. "We've heard loud and clear from creators that we have to be more accurate when it comes to reviewing content, so we don't demonetise videos by mistake."
In addition, Wojcicki said the company would take "aggressive action on comments, launching new comment moderation tools and, in some cases, shutting down comments altogether".
The moves come as advertisers, regulators and advocacy groups express ongoing concern over whether YouTube's policing of its service is sufficient.
YouTube is reviewing its advertising offerings as part of its response, and it signalled that it may next further change the requirements for sharing in ad revenue.
YouTube this year updated its recommendation feature to spotlight videos users are likely to find the most gratifying, brushing aside concerns that such an approach can trap people in bubbles of misinformation and like-minded opinions.
Tech consortium flags videos, images
Meanwhile, a consortium of tech companies including Facebook, Alphabet's Google and Twitter said on Monday a database it created to identify extremist content now contains more than 40 000 videos or images.
The Global Internet Forum to Counter Terrorism (GIFCT) was created in June under pressure from governments in Europe and the US after a spate of deadly attacks.
The forum has recently added Ask.fm, Cloudinary, Instagram, Justpaste.it, LinkedIn, Oath and Snapchat owner Snap.
The group shares technical solutions for removing terrorist content, commissions research to inform its counter-speech efforts, and collaborates more closely with counter-terrorism experts.
The database helps companies identify and remove matching content that violates their respective policies, or even block extremist content before it is posted.
The forum now has 68 companies as members, exceeding its initial goal of 50 companies for 2017, the consortium said in a joint statement.