Facial recognition may be transforming the security of our airports, workplaces, and homes, but in law enforcement, where one might expect to find the keenest take-up of this new and powerful technology, it has not been plain sailing. Allegations of inaccuracy, unlawfulness, bias, and ineffectiveness have become the norm. But the real problem is deployment: the engines powering facial recognition solutions need to be trained for the real world. And, more importantly, the processes around the use and governance of the technology need to be properly architected.
Facial recognition was originally designed for identity assurance and access control: to operate in controlled conditions and to confirm that a person is who they claim to be. Now cameras scrutinize crowds, comparing every passing face to a watch list. This in itself is fraught with complexity. But when there is a match, what happens next? How can systems account for data quality, environmental conditions, and scoring thresholds? How can users move the focus from technology to outcomes?
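The matching step described above can be sketched in a few lines: a detected face is reduced to an embedding vector, compared against every watch-list entry, and only surfaced as a "match" when its similarity score clears a threshold. This is a minimal illustrative sketch, not any vendor's implementation; the toy embeddings, names, and the 0.6 threshold are all invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(probe, watchlist, threshold=0.6):
    """Compare one face embedding against a watch list.

    Returns (identity, score) for the closest entry if its score
    clears the threshold, otherwise (None, score). The threshold is
    the scoring knob the text refers to: raising it trades missed
    matches for fewer false alarms.
    """
    name, score = max(((n, cosine(probe, e)) for n, e in watchlist.items()),
                      key=lambda t: t[1])
    return (name, score) if score >= threshold else (None, score)

# Toy 3-dimensional "embeddings", for illustration only.
watchlist = {"suspect_a": [1.0, 0.0, 0.0],
             "suspect_b": [0.0, 1.0, 0.0]}
print(best_match([0.9, 0.1, 0.0], watchlist))  # clears the threshold: suspect_a
print(best_match([0.5, 0.5, 0.7], watchlist))  # ambiguous face: no match returned
```

In a real deployment the embeddings would come from a trained face-recognition model and the threshold would be tuned per camera and per environment, which is exactly the data-quality problem the paragraph raises.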
With mass adoption now under way, there might just be enough traction to resolve these challenges. Police forces are actively testing facial recognition and working out use cases. There is still much work to be done, but the answer to that deployment challenge could be, quite literally, right in front of them.
Last week’s news that the FBI has been testing Amazon’s facial recognition technology was met with expected levels of dismay from a privacy lobby that has decided facial recognition in law enforcement is a bad thing, period. As it happens, the example cited by the FBI for where the technology could have been used was just about the least controversial one imaginable: sifting thousands of hours of recorded video for sightings of the Las Vegas shooter, Stephen Paddock. “We had agents and analysts, eight per shift, working 24/7 for three weeks going through the video footage,” FBI Deputy Assistant Director Christine Halverson told an AWS conference in November.
Although many law enforcement agencies use facial recognition to examine recorded video footage, saving time and effort, it is much harder to use in real time, in the real world. The mathematics of checking every passerby in a crowded public space against even small watch lists pushes facial recognition to its limits. Only the very best systems can handle it. As a result, there is still remarkably little live facial recognition used in mainstream policing. But that is about to change.
One force exploring the options for real-time facial recognition is the Metropolitan Police in London. Commissioner Cressida Dick said last year that facial recognition “is becoming better by the minute. I think the public would expect us to be thinking about how we can use this technology and seeing whether it is effective and efficient for us.” But in December, when the Met Police parked a surveillance van in the city’s Soho district with a nest of roof-mounted cameras to check Christmas crowds against a watch list of wanted criminals, it prompted a sharp response from privacy activists. The director of Big Brother Watch called it “a dreadful waste of police time and public money” and said it is “well past due that police drop this dangerous and lawless technology.”
Recognizing the contentious nature of the technology, the Met Police has made clear its willingness to engage. Ivan Balhatchet, the Met’s strategic lead for live facial recognition, said in a statement in December: “We continue to engage with many different stakeholders, some who actively challenge our use of this technology. To ensure transparency and continue constructive debate, we have invited individuals and groups with varying views on our use of facial recognition technology to this deployment.”
In contrast to facial recognition, body-worn cameras have already seen mass adoption. These body-worn video devices now adorn police uniforms worldwide, providing evidence management, officer safety, and public reassurance. Most record video for offloading into physical or cloud-based storage systems. Some also live stream video back to control rooms. Others link to weapon holsters to automatically trigger recording. And as mobile devices evolve, newer models of body camera built on smartphone platforms are set to become more powerful still. This means the two technologies will converge. Facial recognition on a body-worn camera is an obvious next step: equipping officers with watch lists of wanted criminals, persons of interest, missing children, vulnerable adults… The list goes on.
In December, even as London debated its surveillance van, a different test of facial recognition was taking place in another world city many thousands of miles away. This test didn’t make headlines. Its details were undisclosed. But it is much more illustrative of how facial recognition will be deployed in frontline policing. As in London, the test took to city streets with a watch list of around two thousand people.
But this test focused on stop and search, with facial recognition on body cameras rather than CCTV or surveillance vans. The test involved officers wearing devices that worked in real time from the same watch list. Within the first hour, two arrests were made following successful facial recognition matches from those body cameras. The suspects are now being prosecuted.
Stop and search is in itself contentious. Such powers go to the core of policing by consent, raising questions about profiling, bias, and justification. Regardless of the criticism, it can be hugely effective, taking weapons off the street and arresting those in possession. Identity confirmation is a key part of the process. Body cameras with facial recognition ensure identities are checked accurately, fully compliant with policy. Known offenders, persons of interest (whether identified or not), vulnerable minors and adults: all can be flagged as such. As an aid to intelligence-led frontline policing, it presents serious benefits. The arrests referenced above were only made possible by such techniques.
The use of facial recognition on body cameras also offers a defense against accusations of racial bias. Policies can be set to prevent officers from searching those not identified by facial recognition, even when stopped. Where the accusation is made that stop and search over-polices low-level crime in certain communities, facial recognition on body cameras offers a counterbalance. It is this kind of safeguarding that will help prompt broader acceptance.
Facial recognition on body cameras will also provide secondary confirmation for matches from surveillance vehicles and CCTV cameras. Following an initial match, an officer on foot approaches the person and runs a second check from a body camera, working from the same watch list. Only if there is a second match is the stop taken further. In itself, this is a material protection against so-called false positives. It also puts a person-to-person interaction before any final decision on an arrest is made.
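The two-step confirmation described above can be expressed as a simple rule, and it also explains why chaining checks is such a strong protection against false positives: if the van/CCTV match and the body-camera re-check err roughly independently, their false-alarm rates multiply. A hedged sketch, with thresholds and error rates invented for illustration:

```python
def confirm_stop(initial_score, bodycam_score,
                 initial_threshold=0.6, bodycam_threshold=0.7):
    """A CCTV/van alert proceeds toward an arrest decision only if the
    independent body-camera re-check against the same watch list also
    clears its (stricter) threshold."""
    return initial_score >= initial_threshold and bodycam_score >= bodycam_threshold

def combined_false_alarm_rate(p_initial, p_bodycam):
    """Assuming roughly independent errors, requiring both checks to
    agree multiplies the per-stage false-alarm rates."""
    return p_initial * p_bodycam

# Two stages that each false-alarm 1% of the time combine to 0.01%.
print(combined_false_alarm_rate(0.01, 0.01))
```

The independence assumption is optimistic (lighting and pose errors can correlate across cameras), but even partial independence makes the second check a meaningful safeguard.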
Edge Artificial Intelligence:
Connecting multiple cameras to the same watch lists, and to each other, opens up the broader benefits of edge intelligence. One such benefit is the potential to use edge-intelligent body cameras to make an initial match, essentially narrowing down the haystack, with that match then sent to a central cloud-based system using the same AI engine, a different AI engine, or even multiple AI engines, to apply a much more accurate filter before any ‘match’ is presented to an operator. All of this would happen within two seconds.
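The cascade the paragraph describes, a permissive on-device pass that narrows the haystack followed by a stricter cloud-side ensemble, might be sketched as follows. All thresholds here are hypothetical, and the "engines" are stand-ins for whatever recognition models a deployment actually uses:

```python
def edge_prefilter(score, threshold=0.5):
    """On-camera first pass: cheap and deliberately permissive,
    so real matches are rarely discarded at the edge."""
    return score >= threshold

def cloud_confirm(engine_scores, threshold=0.8):
    """Cloud-side second pass: one or more (possibly different) AI
    engines re-score the candidate; a 'match' is surfaced to the
    operator only if every engine agrees."""
    return all(s >= threshold for s in engine_scores)

def match_pipeline(edge_score, engine_scores):
    # Only candidates that pass the edge filter are ever forwarded
    # to the cloud, saving bandwidth and central compute.
    return edge_prefilter(edge_score) and cloud_confirm(engine_scores)
```

The design choice is the classic cascade trade-off: the edge stage is tuned for recall so nothing real is lost, while the cloud ensemble is tuned for precision so operators see very few false matches.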
This should be the year when facial recognition shakes off the allegations of inherent bias and inaccuracy. It should be the year when there is recognition that not all facial recognition technologies are alike, and that different tools should be procured for different purposes. And the sterile evaluation of such engines under controlled conditions should give way to actual customer references and proof of outcomes.
The first generation of body-worn cameras in operation today has focused on recording video footage for evidence management systems. Now, body camera 2.0 will shift the focus to live video streaming, facial recognition, and on-device edge AI. This next generation of body-worn cameras will join the billions of other IoT devices deployed on 4G and 5G networks over the coming years. Ultimately, this convergence will see the kinds of body cameras in use today morph into ruggedized, large-screen smartphones that combine capture and streaming capabilities with edge-AI analytics and rich data delivery for the frontline officer. The era of the single-purpose, record-only camera is coming to an end.
For facial recognition, the arguments in 2019 should be around the completely unregulated uses of the technology for marketing and commercial security: AI in silicon, embedded in cheap IP cameras, accessible to all. The use of this technology in law enforcement will cease to be quite so polarizing.
And so, as far as policing is concerned, 2019 will be seen as a turning point for facial recognition. Tests will morph into deployments. Deployments will deliver results. The arguments will be won. The majority of the public will choose personal safety and security over abstract notions of privacy.