Facebook uses a combination of AI, human expertise and government partnerships to keep terrorist propaganda off its platform.
A blog post authored by Monika Bickert, Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager, details how Facebook uses artificial intelligence to keep terrorist content off Facebook, something the social media giant has not talked about publicly before.
The post comes in response to public concerns over the role of tech companies in fighting terrorism online following recent terror attacks. It’s the first in a series of posts asking users for feedback on how Facebook should handle ethical issues.
Facebook committed to making its platform a “hostile place for terrorists” and outlined the steps it has taken to do so. The authors cautioned that there is no quick technological fix for terrorism, but that given Facebook’s scale these steps could be significant.
The use of AI in the fight against terrorism is relatively new, but the authors said “it’s already changing the ways we keep potential terrorist propaganda and accounts off Facebook.”
Facebook’s AI uses image matching and language understanding across multiple platforms and applications to filter and remove terrorist content. When material is identified and removed, algorithms “fan out to try to identify related material that may also support terrorism.”
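In rough terms, image matching works as a lookup against fingerprints of previously removed content. The sketch below is illustrative only, since Facebook has not published its implementation; it uses a cryptographic hash where a production system would use a perceptual hash that survives re-encoding and cropping:

```python
import hashlib

# Hypothetical store of fingerprints of previously removed propaganda.
known_hashes: set[str] = set()

def fingerprint(media: bytes) -> str:
    # A real system would use a perceptual hash robust to re-encoding;
    # SHA-256 stands in here to show the idea of a digital fingerprint.
    return hashlib.sha256(media).hexdigest()

def record_removal(media: bytes) -> None:
    """Remember removed content so re-uploads are caught automatically."""
    known_hashes.add(fingerprint(media))

def matches_removed_content(media: bytes) -> bool:
    """Image matching: does this upload match something already removed?"""
    return fingerprint(media) in known_hashes
```

Once a removal is recorded, any byte-identical re-upload is flagged without human involvement; the “fan out” step the authors describe would then queue material linked to the match for further review.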
“We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account,” the authors write.
This takes advantage of terrorists’ tendency to “cluster” by identifying suspicious profiles, pages and posts.
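The friends-of-disabled-accounts signal can be sketched as a simple ratio check. The threshold and the single signal here are illustrative assumptions; the authors make clear that Facebook combines many signals, not just this one:

```python
def suspicion_score(friend_ids: list[int], disabled_ids: set[int]) -> float:
    """Fraction of an account's friends already disabled for terrorism."""
    if not friend_ids:
        return 0.0
    return sum(f in disabled_ids for f in friend_ids) / len(friend_ids)

def flag_for_review(friend_ids: list[int], disabled_ids: set[int],
                    threshold: float = 0.3) -> bool:
    # "Cluster" heuristic: an account with an unusually high share of
    # disabled friends is routed to human reviewers, not auto-removed.
    # The 0.3 threshold is an assumption for illustration.
    return suspicion_score(friend_ids, disabled_ids) >= threshold
```

Routing flagged accounts to reviewers rather than removing them outright matches the article’s point that algorithms lag people at judging context.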
However, the company concedes, “AI can’t catch everything. Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context.”
That’s why Facebook is also employing human expertise and strategic partnerships to combat terrorism.
“At Facebook, more than 150 people are exclusively or primarily focused on countering terrorism as their core responsibility. This includes academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers,” the authors said.
The social media giant is also tapping into perhaps its greatest resource: its two billion users.
Facebook users “help us by reporting accounts or content that may violate our policies — including the small fraction that may be related to terrorism,” the authors said.
Terrorist content is not limited to Facebook, and the company said it is forming crucial partnerships with other companies, civil society, researchers and governments.
Facebook has partnered with Microsoft, YouTube and Twitter to develop a shared industry database of “hashes,” or digital fingerprints, of terrorist content.
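Conceptually, the shared database lets each participating company contribute fingerprints of content it has removed, so every other participant can check uploads against the combined set. The structure below is an assumption for illustration, not the actual industry schema:

```python
import hashlib

# Illustrative shared database: fingerprint -> contributing company.
shared_db: dict[str, str] = {}

def contribute(company: str, media: bytes) -> str:
    """A participant registers the fingerprint of content it removed."""
    h = hashlib.sha256(media).hexdigest()
    shared_db.setdefault(h, company)
    return h

def is_known(media: bytes) -> bool:
    """Any participant can check an upload against everyone's removals."""
    return hashlib.sha256(media).hexdigest() in shared_db
```

The key property is that only fingerprints, not the content itself, need to be exchanged between companies.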
The authors acknowledged that encrypted messaging serves legitimate purposes and that Facebook is unable to read the contents of individual encrypted messages. “But we do provide the information we can in response to valid law enforcement requests, consistent with applicable law and our policies,” the authors wrote.