Facial recognition as a surveillance tool and privacy threat

Anton P. | May 5, 2021

Facial recognition is a powerful technology signaling the end of privacy as we know it. Imagine leaving your home and having a dozen cameras take your picture. It can be worse: facial recognition software might identify you as a suspect or a wanted criminal.

Overuse of facial recognition, especially at this stage of its maturity, is more than just intrusive. Criminal investigations could come to rely heavily on face-matching algorithms applied to public security cameras. In many cases, that reliance could produce false positives, inaccuracies, and even wrongful arrests. Despite these concerns and the potential for error, facial recognition is something you might already use. What matters is the regulations and boundaries put in place to control the use of this technology.

Facial recognition: how does it work?

Facial recognition is a technology for verifying or identifying people according to their distinct facial features. The method turns your face into data points, such as the distance between the eyes and the shape of your nose. Such information belongs to biometrics: personal data derived from processing physical, behavioral, or physiological attributes.

Thus, your face becomes a mathematical representation comparable to other gathered records. Sometimes, specialists refer to it as the face template.
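To make the idea of a face template concrete, here is a minimal sketch of how two templates might be compared. The numbers and the 4-dimensional vectors are purely hypothetical (real systems extract templates with far more dimensions from a neural network); the point is that matching reduces to measuring the distance between two vectors, here with cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Return the cosine similarity of two equal-length vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional face templates (real systems use 128+ dimensions).
enrolled = [0.21, 0.54, 0.33, 0.80]   # template stored when the person enrolled
probe    = [0.20, 0.55, 0.30, 0.82]   # template extracted from a new camera frame

score = cosine_similarity(enrolled, probe)
print(f"similarity: {score:.3f}")   # a score near 1.0 suggests the same person
```

A real system would then compare this score against a tuned decision threshold before declaring a match.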

Accuracy is also not immune to external factors. Facial recognition systems differ in their capacity to process blurry, low-quality images or unusual viewing angles. For example, many current services using facial recognition only accept pictures taken in precise positions.

Where is facial recognition used?

Facial recognition software is one of the most promising surveillance technologies ever made. Its potential is enormous, but regular users might have had only trivial encounters with it. A study in 2019 observed the integration of facial recognition into many sectors:

  • Tech giants preparing to sell facial recognition software to law enforcement. One of the most controversial examples is Amazon Rekognition, software capable of real-time analysis of video streams and face-based verification. In 2020, Amazon confronted the controversy and announced a one-year moratorium on police use of the platform. Nevertheless, several US police departments (one in Oregon and one in Florida) had actively used it. The loudest dispute around facial recognition, however, concerns Clearview AI. Specialists have criticized the company for scraping data from social media sites to build a database of more than 8 billion photos.
  • Banks using facial recognition for authentication. Many banks are dipping their toes into face recognition technology; after all, banking is a sector facing a high degree of fraudulent activity. Banks like Chase, HSBC, and USAA already let clients log in to their mobile banking apps with their faces, and financial institutions across the world are exploring the technology to secure their operations. CaixaBank, a Spanish multinational financial services company, has introduced ATMs with facial recognition.
  • Cosmetic companies letting clients try products virtually. Many beauty brands experiment with facial recognition for a more engaging customer experience. For instance, Covergirl uses it in its Custom Blend App, which helps clients find the right foundation shade. In physical MAC stores, customers can also try on beauty products via in-store augmented-reality mirrors.

These are only a few examples of facial recognition technology making its way into our society. Healthcare firms use face recognition for fighting fraudulent insurance claims and improving patient care. Some insurance companies also explore the capabilities of this technology to price accurate insurance premiums. Finally, a number of airlines employ facial recognition to boost the security at their airports.

Main issues and pitfalls of facial recognition

A growing number of US cities have passed laws restricting government use of facial recognition, including Boston and Jackson, Mississippi. Portland, Oregon, has gone a step further and barred private businesses from deploying the technology as well.

The European Data Protection Supervisor (EDPS) has recently expressed support for a ban on facial recognition in the EU. The announcement came after the European Commission issued draft rules that would permit face recognition for locating missing children, terrorists, or criminals. According to the EDPS, facial recognition is intrusive and should face heavy regulation.

Well-grounded concerns about adopting facial recognition relate not only to the intrusion into individuals’ private lives. Other problems also typically accompany this technology.

Deepfake videos and images

Deepfakes are images, videos, or audio recordings that replace one individual's likeness with another's. The problem is that facial recognition systems might not be able to distinguish real media from deepfakes.

Thus, criminals or fraudsters could use falsified content to trick face recognition into granting access. In 2019, deepfaked audio tricked a UK-based energy firm into executing a fraudulent bank transfer, so deepfakes are already on criminals' radar.

Misidentification

Facial recognition software is by no means perfect. Researchers have described it as error-prone and biased. According to the EFF, face recognition disproportionately misidentifies ethnic minorities (African Americans in particular), young people, and women. Thus, the systems can generate many false negatives and false positives.

A false negative means the software fails to match a face that should match; a false positive means it matches the wrong person. A man in Michigan was among the first known victims of a false positive: the Detroit police wrongfully arrested him after facial recognition identified him as a suspected shoplifter.
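The trade-off between the two error types comes down to where the system sets its match threshold. The toy scores below are invented for illustration: raising the threshold suppresses false positives but produces more false negatives, and vice versa.

```python
# Hypothetical similarity scores between a probe face and gallery faces
# (1.0 = identical). The decision threshold controls the error trade-off.
genuine_scores  = [0.91, 0.88, 0.62]   # probe compared against the same person
impostor_scores = [0.41, 0.57, 0.73]   # probe compared against different people

def error_rates(threshold):
    """Count errors at a given threshold: (false negatives, false positives)."""
    false_negatives = sum(s < threshold for s in genuine_scores)   # true match rejected
    false_positives = sum(s >= threshold for s in impostor_scores) # wrong person accepted
    return false_negatives, false_positives

for t in (0.5, 0.7, 0.9):
    fn, fp = error_rates(t)
    print(f"threshold {t}: {fn} false negatives, {fp} false positives")
```

No threshold eliminates both error types at once, which is why a "match" from such a system should be treated as a lead, not proof of identity.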

Improper data acquisition and database creation

Entities working with facial recognition need extensive databases. In many cases, they scrape data from social media sites and other platforms holding vast numbers of images. This questionable harvesting has already happened: Clearview AI took millions of photos from social media to build its database. Without explicit consent to use the images, companies like it might face legal consequences.

Misuse of biometric data and facial recognition tech

One of the biggest red flags for citizens and specialists alike is the potential for abuse. Face recognition can support heavily restrictive mass surveillance. Without appropriate regulation, many public surveillance cameras could identify each of us, and citizens would lose the anonymity and privacy they still have in the physical world.

In China, facial recognition is already a central part of monitoring citizens, with vast camera networks operating nationwide. A database leak in 2019 revealed just how widespread the tracking is: it contained 6.8 million records logged in a single day.

What comes next?

The use of facial recognition needs strict regulations and boundaries. While it is an innovative technology improving business operations and consumer experiences, it is just as controversial. Companies using it need to safeguard biometric data, as breaches are a serious cause for concern.

Additionally, law enforcement agencies and police departments should reconsider their use of this tech. Many specialists argue that the current state of face recognition software is not reliable enough to drive criminal investigations.

All in all, it is essential to ensure that facial recognition does not normalize this type of surveillance.

Anton P.

Former chef and head of the Atlas VPN blog team. He's an experienced cybersecurity expert with a background in technical content writing.

Tags:

biometrics, clearview ai, deepfakes