Three global tech giants — IBM, Amazon, and Microsoft — have all announced that they will no longer sell their face recognition technology to police in the USA, though each announcement comes with its own nuance.

The new policy comes in the midst of ongoing national demonstrations in the US about police brutality and more generally the subject of racial inequality in the country under the umbrella of the Black Lives Matter movement.

While the three companies have only a relatively small share of the law enforcement face recognition market in the country, their willingness to publicly disavow the segment has drawn significant attention to the issue.

Despite that modest market share, all three exert outsized influence through the scale of their lobbying activities in Washington and their capacity to shape regulation.

An image from IBM’s Diversity in Faces dataset.

In contrast, Clearview AI, one of the leaders in the sector, announced earlier this year that it would only sell its solution to law enforcement agencies. It is currently understood to have over 600 law enforcement clients in the US.

Clearview AI, along with its Australian founder Hoan Ton-That, is increasingly in the spotlight as the debate over racial and gender algorithmic bias heats up. The European Data Protection Board, the EU's data protection oversight body, warned earlier this month that the use of Clearview AI's service by law enforcement would likely not be consistent with EU data protection law.

In response, CNBC quotes Hoan Ton-That as saying, “Clearview’s image-search technology is not currently available in the European Union. Nevertheless, Clearview AI processes data-access and data-deletion requests from EU residents. In fact, Clearview AI searches the public internet just like any other search engine.”

However, on that last point it is worth noting that Clearview AI has also picked fights with Google, Twitter, YouTube, and Facebook, from which it has scraped images of faces to fuel its algorithm. Those companies have sent cease and desist notices, but Clearview AI argues the scraping is protected speech under the US First Amendment.

For its part, IBM sourced the million faces for its dataset from a Flickr collection, causing a ruckus at the time because the subjects of those images had not given explicit consent for their use, although the full 98-million-image collection was covered by a Creative Commons licence.

Clearview AI is also working with law enforcement agencies in Australia — though remarkably, police denied this until a client list leaked, according to Jake Goldenfein, writing in The Conversation.

He wrote, “Australian police agencies initially denied they were using the service. The denial held until a list of Clearview AI’s customers was stolen and disseminated, revealing users from the Australian Federal Police as well as the state police in Queensland, Victoria and South Australia.”

Information Age, published by the Australian Computer Society, reported that “SAPOL (South Australian police) was blunt in denying its use of Clearview AI and did not provide further comment about using facial recognition technology.”

According to Information Age, “Both Queensland and Victorian Police, however, said they use facial recognition technology but refused to comment on ‘operational methodology’ or ‘the specifics of the technology’.”

IBM’s strike

The use of face recognition by law enforcement was brought back into focus when IBM wrote to the US Congress about racial justice reform. IBM CEO Arvind Krishna said in the letter, which is posted on IBM's THINKPolicy Blog, that his company was getting out of the face recognition game altogether.

Krishna writes, “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

He says AI is a powerful tool that can help law enforcement keep citizens safe but then cautions that “Vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.

“Finally, national policy also should encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques,” writes Krishna.
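Krishna’s letter does not spell out what such bias testing should look like in practice, but a minimal sketch of the kind of disaggregated audit he describes, using entirely hypothetical group labels and match outcomes, might simply compare false match rates across demographic groups rather than reporting a single aggregate accuracy figure:

```python
# Minimal sketch of a disaggregated bias audit for a face-matching system:
# compare the false match rate across demographic groups.
# The group labels and results below are hypothetical illustrations, not real data.
from collections import defaultdict

# Each record: (demographic_group, predicted_match, actual_match)
results = [
    ("group_a", True, False),   # false match
    ("group_a", False, False),
    ("group_a", True, True),
    ("group_b", True, False),   # false match
    ("group_b", True, False),   # false match
    ("group_b", False, False),
]

non_match_trials = defaultdict(int)   # genuine non-match trials per group
false_matches = defaultdict(int)      # of those, how many were wrongly matched

for group, predicted, actual in results:
    if not actual:
        non_match_trials[group] += 1
        if predicted:
            false_matches[group] += 1

for group in sorted(non_match_trials):
    fmr = false_matches[group] / non_match_trials[group]
    print(f"{group}: false match rate = {fmr:.2f} "
          f"({false_matches[group]}/{non_match_trials[group]} non-match trials)")
```

Publishing error rates broken down by group in this way, rather than one overall number, is the kind of auditing and reporting the letter appears to call for.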

IBM was followed closely by Amazon, which has had problems with algorithmic bias in the past. Famously, it was forced to scrap a recruiting tool that discriminated against female candidates.

Microsoft quickly followed suit, although its commitment was more nuanced.

Neither Amazon nor Microsoft is getting out of the face recognition market altogether. Rather, they are taking a more limited approach to law enforcement. Amazon says it won’t sell to police for the next year, while Microsoft says it will not sell to police until there is a federal law in place regulating the technology’s use.

And both have a vested interest in influencing any potential regulation and bending it in their favour should they decide to hop back into the market.

Bias beyond the platform

The data which feeds algorithms, and which fuels machine learning, is also problematic, say practitioners.

According to Habibullah Khan, the founder of penumbra digital, “It is impossible to get a neural network without racial bias because the training data you feed to fuel it is always affected by the personal bias of people who collected data.”

He also calls out the impact of environmental bias. “This often has to do with simply not enough data of a certain type because you culturally were influenced to think it was not needed.”

“Personal happens all the time but environmental is the reason that all large data sets get corrupted. This is why when someone found black people being tagged as gorillas, Google still has not fixed it. That was in 2015!”

Khan also shared an example of an image database that identified a handheld thermometer as a firearm when held by a black person, and an electronic instrument when held by an Asian person.

“All large open data sets that you train algorithms on suffer from these biases and I do not think it can be fixed. Because image databases are reflective of society and just like society is finding it hard to fix racism because it is ingrained, image databases are exactly the same.”

Khan told Which-50, “Our only hope is to build a special ‘diversity’ database from scratch and have companies and data scientists use that. It will be a large expensive effort requiring serious technical and financial muscle but it is the only way.”
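Khan’s proposed fix implies being able to measure how evenly a training set covers different groups in the first place. A minimal sketch of such a representation check, again with purely hypothetical labels and counts, could look like this:

```python
# Minimal sketch of a representation check over a labelled image dataset:
# count how each demographic group is represented.
# The group labels and counts are hypothetical illustrations only.
from collections import Counter

# In practice these labels would come from the dataset's metadata.
image_labels = ["group_a"] * 700 + ["group_b"] * 200 + ["group_c"] * 100

counts = Counter(image_labels)
total = sum(counts.values())

for group, count in counts.most_common():
    share = count / total
    print(f"{group}: {count} images ({share:.0%} of dataset)")

# A heavily skewed distribution like the one above is an example of the
# environmental bias Khan argues a purpose-built diversity dataset would avoid.
```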

For this reason, he said he took the announcements from the major tech companies on face recognition with a pinch of salt.

“A real commitment to racial equality would be coming together to build this open image diversity database.”
