A Reckoning with Facial Recognition Technology and Responsibility

Several major players in the facial recognition market – IBM, Amazon, and Microsoft – have halted sales of facial recognition technology (FRT) to police departments in the United States. Each of these companies made a statement regarding the technology’s relationship to public safety.

IBM CEO Arvind Krishna sent an open letter to several US Senators and House Representatives, stating “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology…for mass surveillance, racial profiling, violations of basic human rights and freedoms…”

Amazon’s corporate blog states that “We’re implementing a one-year moratorium on police use of Amazon’s facial recognition technology.”

Microsoft President Brad Smith said, “We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”

Separating Use of Facial Recognition Technology for Law Enforcement from Other Use Cases

Each of these statements is backed by a longer, more nuanced explanation from the respective company. But the announcements leave many users questioning the biometric authentication methods they use to unlock their phones, or the detection technology on many social media platforms that recognizes faces in photos. Users should remember that “facial recognition” is a generic term often applied to dissimilar technologies, which can be more accurately described as separate categories of facial recognition technology:

Facial Matching is the technology that IBM, Amazon, and Microsoft are most likely referring to, as it has the most applicability to surveillance and to identifying a previously unknown suspect. Facial Matching takes a one-to-many approach: the user of a matching system inputs a photo or video footage and discovers the identity of a previously unknown person by finding a match in a database of many faces.

Facial Authentication, by contrast, requires positive identification of an individual: the user already knows whom the input photo or footage is supposed to belong to, and the system must verify that the input data actually matches that identity. This one-to-one approach has vastly different implications for law enforcement and the potential rise of mass surveillance for public safety.
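The one-to-many versus one-to-one distinction above can be sketched in code. This is a minimal illustration, not a real system: the embedding vectors, the cosine-similarity comparison, and the 0.8 threshold are all assumptions standing in for a trained face-embedding model and a calibrated decision threshold.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two (hypothetical) face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(probe, enrolled_template, threshold=0.8):
    """One-to-one (Facial Authentication): verify the probe
    against the single template of a claimed identity."""
    return cosine_similarity(probe, enrolled_template) >= threshold

def identify(probe, gallery, threshold=0.8):
    """One-to-many (Facial Matching): search a database of
    {identity: template} pairs for the best match above the
    threshold; return None if no one matches."""
    best_id, best_score = None, threshold
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

The surveillance concern maps onto `identify`: it requires a pre-built database of many faces and returns an identity for an unknown person, whereas `authenticate` only ever compares against the one template the person has enrolled themselves.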

Potential Upcoming Legislation on Facial Recognition

Legislation on facial recognition technology is in the pipeline in both the EU and the US. In February 2020, Assembly Bill 2261 was introduced in California; it would add a section on facial recognition to the CCPA. In summary, the bill requires that individuals give consent before their facial data is collected, and that once data is collected, individuals retain protected rights to knowledge, erasure, etc. This operates on the assumption that collection of the data is acceptable or inevitable. The bill prescribes that data processors make available the technical capacity to independently test facial recognition technology for bias and unfair performance differences. Data controllers must provide “conspicuous and contextually appropriate notice” whenever a facial recognition service is used in a public space, including the purpose it is used for. Consent from an individual to enroll their face in a facial recognition service is required for all purposes except security or safety purposes. Given current public sentiment towards the police, it is possible that the bill will not pass into law.

Current EU data protection rules and the Law Enforcement Directive do not permit biometric data, including facial images, to be processed to identify an individual in the one-to-many approach, except for reasons of substantial public interest. The Commission is still discussing the intersection of AI and facial recognition technology with the appropriate use of biometric data, and has not yet issued an official statement. Since 2017, police departments in Germany, France, and the UK have piloted facial recognition in the field and with volunteers. In France and Germany, it was determined that the technologies piloted could not legally be used outside of a test setting.

Impacts for Business

The sweeping statements made by industry leaders pose a risk that facial recognition technologies will lose public favor. The distinction between authentication, detection, and identification services is not yet clear in the public sphere, and the public sector is not moving quickly enough to address general concerns about safety and human rights violations. It is the role of the public sector to initiate legislation, but it still relies on private companies to develop or procure technologies. As usual, it is the private sector that drives progress on facial recognition by continuing to bring solutions to market, and it should strive to protect fundamental human rights when doing so.
