Facebook placed warning labels on 50 million pieces of content related to COVID-19 during April in an effort to alert users to false information on the platform. 

Since March 1, Facebook has removed more than 2.5 million pieces of content for the sale of masks, hand sanitiser, surface disinfecting wipes and COVID-19 test kits using a new computer vision tool. However, an investigation by BuzzFeed News found some ads have been able to slip through despite the ban.

In a blog post published this week, the company acknowledged the tools were “far from perfect” and detailed how it is using AI to detect misinformation during the pandemic. 

This week Facebook also released its latest Community Standards Enforcement Report, which provides insight into the scale of controversial content the company needs to clean up on its platforms.

The report covers four key areas across Facebook and Instagram: hate speech, adult nudity and sexual activity, violent and graphic content, and bullying and harassment. 

In the first three months of the year, Facebook removed 1.9 billion posts that were categorised as spam.

[Infographic: Toxic Content Runs Rampant on Facebook, via Statista]

The fifth edition of the report covers the period from October 2019 through March 2020, so it only records the very beginning of how the pandemic played out on Facebook’s platform. 

Facebook uses a combination of AI tools and human moderators to detect content that violates its policies. In a company blog post, Guy Rosen, VP of Integrity, highlighted Facebook’s technology advancements and noted the company relied less on human labour during the pandemic.

“When we temporarily sent our content reviewers home due to the COVID-19 pandemic, we increased our reliance on these automated systems and prioritised high-severity content for our teams to review in order to continue to keep our apps safe during this time,” Rosen wrote. 

Social distancing measures had an impact on the reported numbers. In areas like bullying and harassment, where an understanding of context is vital, Facebook tends to rely more heavily on human review. As a result, the amount of bullying content actioned fell from 2.8 million pieces in Q4 2019 to 2.3 million in Q1 2020, reflecting a reduced and remote workforce in late March due to COVID-19.

On the other hand, technology has helped increase the amount of other offending content that is flagged. Action taken against hate speech rose from 5.7 million pieces of content in Q4 2019 to 9.6 million in Q1 2020, after Facebook expanded its proactive detection technology to new languages and improved its English detection.

“Over the last six months, we’ve started to use technology more to prioritise content for our teams to review based on factors like virality and severity among others. Going forward, we plan to leverage technology to also take action on content, including removing more posts automatically. This will enable our content reviewers to focus their time on other types of content where more nuance and context are needed to make a decision,” Rosen wrote.

Earlier this week Facebook agreed to pay US$52 million (AU$80 million) to settle a class action brought by content moderators who suffered PTSD from policing content on the platform.

Facebook also published its biannual Transparency Report, which showed that during the last six months of 2019 government requests for user data increased by 9.5 per cent, from 128,617 to 140,875. Of the total volume, the US continues to submit the largest number of requests, followed by India, the UK, Germany and France.
