(Thanks to Policing Insight for publishing this piece. It’s an expanded version of comments I made at a recent Westminster eForum event on biometrics and digital identity, which were also written up by Infosecurity Magazine and TechMonitor).
The rapid advance of biometric technology is generating new identification and surveillance opportunities for the police and security sectors, alongside growing concern about oversight. Jessica Figueras, Vice-Chair of the UK Cyber Security Council (writing here in a private capacity), believes that improved ethical governance will be crucial as the use of biometrics expands – and that the lack of ‘algorithmic inspection’ skills required to provide that governance could slow the pace of adoption.
It’s often said that technology develops at a speed that outpaces society’s ability to manage it, and biometrics is a classic case.
Biometrics today is a family of uniquely powerful identification techniques with obvious applications in cyber security, public safety, national security and criminal justice. But the field is not new: fingerprint evidence was first accepted in British courts 120 years ago, and we have known for many years that such technologies are neither foolproof nor ethically neutral.
UK citizens have accumulated some protections in areas previously considered highest risk, namely the police use of DNA and fingerprints. Automated facial recognition is now the focus of much current debate, and Professor Fraser Sampson, the new Biometrics and Surveillance Camera Commissioner for England and Wales, has recently put his concerns on the record.
Harms that may arise if the technology does not work as intended, including mistaken identity, are amplified when bias and discrimination are present. And as technologies mature, ethical concerns may shift to harms that arise when the technology works exactly as intended.
These tend to be civil liberties concerns: Professor Sampson pointed to the development of a ubiquitous layer of ‘omniveillance’, with police increasingly able to access data from private devices as well as publicly owned data sources. The primary framework for governance of these issues is data protection law.
Appetite for oversight
So is further legislation a solution? Professor Sampson believes that Parliament should not ban the use of facial recognition by law enforcement, as has been proposed by the European Parliament, because police need some discretion to match the technological capability available to criminals.
Policymakers in France, Germany, the Czech Republic, Hungary and Sweden have expressed similar views. There are often complex trade-offs between online privacy and security which are not always well captured in public debate.
But there is appetite for some form of increased public oversight. Professor Sampson has emphasised that societal acceptance is vital, and that wide public consultation in England and Wales is needed.
The Scottish Government has already acted on biometrics in Scottish policing; its 2020 legislation creating a new Scottish Biometrics Commissioner goes further than the equivalent regime in England and Wales.
It takes a systems approach, explicitly considering the impact on equalities and on vulnerable citizens, and requiring the Commissioner to work with a wide range of public bodies and stakeholders.
In the short to medium term it seems likely that any constraint on police use of biometrics will be driven by codes of practice, both mandatory and voluntary, and by citizens and activist groups making increasing use of the law. As we saw from a recent successful civil claim over a neighbour’s smart doorbell, a legal challenge against South Wales Police, and another against Uber, citizens are becoming more assertive.
But many use cases for biometrics are far less visible than a camera, and the Post Office’s Horizon case showed how difficult it can be for individuals to claim redress when full legal disclosure and forensic computing are required to prove the facts. This imbalance of power can only exacerbate the atmosphere of distrust surrounding biometrics.
Ethical governance
Ethical codes of practice are vital, then, although implementing a code is not a straightforward mitigation for organisations that design or operate biometric systems.
Whatever intentions and principles have been espoused during the design process, it is impossible to say that a given system is delivering fair and reasonable outcomes without validation by independent technical auditors – what has been dubbed algorithmic inspection. The complexity of modern data analytics techniques means that the skills required are in short supply, and expensive.
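To make this concrete, here is a minimal sketch of one check an algorithmic inspector might run: comparing false match rates across demographic groups in a face-matching system. The group labels, threshold and data are purely illustrative assumptions, not drawn from any real audit framework or police deployment.

```python
# Hypothetical sketch: compare false match rates across demographic groups.
# All names, thresholds and data below are illustrative assumptions.
from collections import defaultdict

def false_match_rates(results, threshold=0.9):
    """results: iterable of (group, score, is_true_match) from a labelled test set."""
    false_matches = defaultdict(int)   # non-matching pairs wrongly flagged as matches
    non_matches = defaultdict(int)     # all genuinely non-matching pairs, per group
    for group, score, is_true_match in results:
        if not is_true_match:
            non_matches[group] += 1
            if score >= threshold:     # the system would wrongly declare a match
                false_matches[group] += 1
    return {g: false_matches[g] / non_matches[g] for g in non_matches}

# Made-up test data: a large gap between groups is exactly the kind of
# finding that independent inspection exists to surface.
sample = [
    ("group_a", 0.95, False), ("group_a", 0.40, False),
    ("group_b", 0.92, False), ("group_b", 0.91, False),
]
print(false_match_rates(sample))  # e.g. {'group_a': 0.5, 'group_b': 1.0}
```

Even a basic measurement like this requires privileged access to the system and a representative labelled test set, which is part of why independent, skilled inspectors are needed.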
Given the speed of biometric adoption across the public and private sectors alike, significant new capacity would be required to provide all system designers with independent, skilled oversight.
Yet there is no ready-made army of algorithmic inspectors on hand, no agreed framework for how inspection might work and, importantly, how it might be paid for. And of course, private actors may feel under little pressure to subject themselves to any independent scrutiny at all.
This lack of a practical ethical governance framework for biometrics seems likely to slow its rate of adoption in the most sensitive areas, perhaps even providing some room for society to catch up.