Campaigners have warned that new guidance from the College of Policing on the use of facial recognition technology means victims of crimes and potential witnesses could be placed on police watchlists.
The college said the new advice for forces in England and Wales will make sure the use of facial recognition technology is “legal and ethical”, but civil liberties groups branded it “an atrocious policy and a hammer blow to privacy and liberty”.
They claim it could mean people with mental health problems are placed on a watchlist if they are being sought by police.
According to the guidance, the technology can be used in police operations to find “people who are missing and potentially at a risk of harm; find people where intelligence suggests that they may pose a threat to themselves or others; and arrest people who are wanted by police or courts”, including terrorists and stalkers whom officers have intelligence on.
It also says that “images that may be deemed appropriate” for inclusion on any watchlist include “a victim of an offence or a person who the police have reasonable grounds to suspect would have information of importance and relevance to progress an investigation, or who is otherwise a close associate of an individual”.
The guidance was issued after the Court of Appeal ruled in 2020 that the use of facial recognition cameras by South Wales Police as part of a pilot scheme breached privacy rights and broke equalities law.
‘Orwellian surveillance technology’
Silkie Carlo, director of the civil liberties and privacy campaigning organisation Big Brother Watch, said the group had “warned about mission creep with this Orwellian surveillance technology and now we see that this new policy specifically allows innocent people to be put on facial recognition watchlists”.
“This includes victims, potential witnesses, people with mental health problems, or possible friends of any of those people. It is an atrocious policy and a hammer blow to privacy and liberty in our country,” she said.
“Parliament has never debated facial recognition or passed a law allowing it to be used. The public wants police to catch criminals but no one wants dangerously inaccurate tech turning our streets into police line-ups.”
She added that the government “should ban live facial recognition until it has properly considered the extraordinary risks it poses to rights and freedoms in Britain”.
Technology changes ‘the way we move through public spaces’
According to the College of Policing, live facial recognition “turns a digital image into a numerical value before comparing it with images on a police database”, a watchlist that should be drawn up afresh each time the technology is used.
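The matching step the college describes — reducing an image to a numerical value and comparing it against a watchlist — can be sketched roughly as below. This is a minimal illustration only: the `embed` function, the similarity threshold, and the watchlist contents are hypothetical stand-ins, not the systems actually deployed by police forces, which use trained neural networks.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Illustrative stand-in for a face-embedding model: reduces an
    image to a fixed-length unit vector (a "numerical value").
    Real systems use a trained neural network for this step."""
    vec = np.resize(image.astype(float).flatten(), 128)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def match(probe: np.ndarray, watchlist: dict, threshold: float = 0.9):
    """Compare a probe embedding against each watchlist embedding by
    cosine similarity; return the best identity above the threshold,
    or None if nobody on the watchlist is a close enough match."""
    best_id, best_score = None, threshold
    for person_id, ref in watchlist.items():
        score = float(np.dot(probe, ref))  # vectors are unit-length
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

The threshold is the crux of the accuracy debate: set it low and innocent passers-by are flagged as matches; set it high and genuine suspects slip through.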
It has so far been trialled by a small number of police forces to “detect crime and keep people safe” but critics have privacy concerns over its wider use.
Emmanuelle Andrews, policy and campaigns manager at Liberty, a campaigning organisation, said the guidance “does not solve the underlying problem that facial recognition technology does not make people safer”.
She said it “collect[s] sensitive biometric data from everyone that passes through the camera, fundamentally changing the way we move through public spaces”.
“The safest, and only, thing to do with facial recognition is to ban it.”
According to David Tucker, head of crime at the College of Policing, facial recognition “will help police catch some of the most dangerous offenders including stalkers, terrorists and others that the public want off our streets”.
He said: “It will be used overtly and unless a critical threat is declared, the public should be notified in advance on force websites or social media about its use.
“We hope that those with concerns about this technology will be reassured by the careful safeguards we’ve set out as requirements for the police who wish to use it, based on a consistent and clear legal and ethical framework across all police forces.”