Anti-Terrorism Database Tries to Stop Spread of Terrorist Content

Should we be concerned about a new anti-terrorism database?

Earlier this week, Facebook, Microsoft, Twitter, and YouTube announced a new initiative to prevent the spread of terrorist and extremist content on their sites and social networks. The companies have created a shared database they will use to report content that “promotes terrorism,” allowing them to work together to remove such content from their websites.

The database works by storing hashes of content that each site deems terroristic. A hash, the output of a mathematical algorithm run over a file, acts as a digital fingerprint that uniquely identifies that file.

Once the hashes are in the database, other participating companies can use them to scan the content their users post, then review and remove anything that matches. Detection becomes a group effort: content flagged on one platform can be caught on all of them.
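To make the mechanism concrete, here is a minimal sketch of hash-based matching in Python. Everything in it is an illustrative assumption: the consortium has not published its hashing scheme or API, the SHA-256 digest stands in for whatever fingerprint the companies actually exchange, and the function names are hypothetical.

```python
import hashlib

# Stand-in for the shared database: a set of fingerprints of content
# that some participating company has already flagged as terroristic.
flagged_hashes = set()

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest that uniquely identifies this exact file."""
    return hashlib.sha256(data).hexdigest()

def flag_content(data: bytes) -> None:
    """Contribute a fingerprint to the shared database.
    Only the hash is shared, never the file itself or any user data."""
    flagged_hashes.add(fingerprint(data))

def check_upload(data: bytes) -> bool:
    """Return True if an upload matches previously flagged content.
    Per the companies' statement, a match triggers human review,
    not automatic removal."""
    return fingerprint(data) in flagged_hashes

# One company flags a file; another later catches the identical upload.
flag_content(b"bytes of a flagged propaganda video")
print(check_upload(b"bytes of a flagged propaganda video"))  # True
print(check_upload(b"bytes of an unrelated video"))          # False
```

The appeal of this design is that companies exchange only fingerprints: no raw content, and no personally identifiable information, ever leaves the platform that flagged it.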

We know terrorist groups have used social media as an extremely effective tool for spreading propaganda and evidence of their exploits. The new database aims to halt the distribution of this content, which is not only a tool for radicalization but also deeply upsetting to anyone who encounters it by accident.

However, the database also raises privacy concerns about data sharing between major companies, along with worries that “terrorism” is a muddy, loosely defined label.

Many of the major sites driven by user submissions use a similar system to automatically identify and remove child pornography. A system named PhotoDNA was specifically developed for this purpose, and has been adopted by the National Center for Missing and Exploited Children.

PhotoDNA and other similar systems are widely used and accepted as effective. The same companies that formed this anti-terrorism database (Facebook, Twitter, YouTube, and Microsoft) already use PhotoDNA or an analogous system. Still, some users may not realize how far these detection measures reach: Google’s proprietary system scans every image sent via Gmail, a channel many users consider private.
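One technical point worth understanding: an exact hash such as SHA-256 changes completely if an image is resized or re-encoded, so PhotoDNA-style systems rely on perceptual hashes that survive such edits. PhotoDNA’s algorithm is proprietary, so the sketch below uses the classic “average hash” purely as an illustration; the file names in the comments are hypothetical.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink to an 8x8 grayscale thumbnail, then
    record one bit per pixel for whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance means near-duplicate images."""
    return bin(a ^ b).count("1")

# A recompressed or lightly edited copy should land within a few bits of
# the original, whereas a cryptographic hash of it would differ entirely:
# h1 = average_hash("original.jpg")      # hypothetical files
# h2 = average_hash("recompressed.jpg")
# near_duplicate = hamming_distance(h1, h2) <= 5
```

Matching is therefore a small Hamming distance rather than exact equality, which is what lets these systems catch re-uploads that have been cropped, recompressed, or watermarked.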

The difference between PhotoDNA and this new anti-terrorism database is oversight. Hany Farid, one of the creators of PhotoDNA, told The Guardian:

“There needs to be complete transparency over how material makes it into this [anti-terrorism] hashing database and you want people who have expertise in extremist content making sure it’s up to date. Otherwise, you are relying solely on the individual technology companies to do that.”

Hashes put into the PhotoDNA system “are categorized centrally by law enforcement,” but no similar independent body exists for this anti-terrorism database. Glyn Moody, a contributing editor at Ars Technica, wrote:

“There is an important difference between the two situations. Whereas child sex abuse is unambiguously illegal, and relatively clear-cut in its definition, it is much harder defining what exactly constitutes ‘violent terrorist imagery or terrorist recruitment videos or images.’”

In their joint statement, the four companies wrote that “each company will independently determine what image and video hashes to contribute to the shared database,” and that “no personally identifiable information will be shared, and matching content will not be automatically removed.”

That suggests they are well aware of concerns about censorship and over-reach. However, nothing in the system actually prevents such over-reach. Highly politicized movements like Black Lives Matter have been described as “domestic terrorism” by some, raising the question of whether similar protests and social movements may one day be labeled “terrorism.”

Over-reach does not have to be the intent of the system; all it takes is one reviewer making a questionable call.

There is reason to call these companies’ moderation abilities into question. Last month, Facebook automated its news feed, replacing its human editing team with an algorithm. Almost immediately, the algorithm failed, letting fake news appear prominently in the “Trending” section shown to all Facebook users. Twitter, for its part, has been heavily criticized for failing to curb abuse and harassment.

Moody suggests that this initiative may be a response to pressure from European politicians. He notes that days before the announcement, “the EU’s justice commissioner Vera Jourova said that the four were not doing enough to comply with the code [the EU’s code of conduct on illegal online hate speech], and she threatened to bring in new Europe-wide laws to address the problem unless they and other online services tried harder.”

Is this an empty gesture or a serious system? Only time will tell, but the data sharing and the lack of transparency set a bad precedent either way.