Clearview AI - the bygone era of privacy

Anton P. | February 21, 2020

Facial recognition isn’t new. But over the last few weeks, fears about image recognition and privacy surveillance have intensified, thanks to a startup called Clearview AI. This little-known company owns a database of over 3 billion images scraped from all corners of the internet. Law enforcement is using it, yet nobody is stopping it.

The power of image recognition

Clearview AI, already dubbed the scariest facial recognition software, might end privacy as we know it, as the New York Times points out. The secretive startup created a technology that readily identifies a person from their face. With the quick snap of a single photo, Clearview AI's software exposes other public images of that person, along with links to where those pictures appear. From social media profiles to e-wallets, the tool makes it invasively easy to learn a stranger’s name or even home address.

The database, filled with over 3 billion photos, contains images scraped from sites like Facebook, Instagram, Twitter, YouTube, and even Venmo. Until now, building such a system has been a radical taboo due to privacy invasion concerns. Even companies capable of developing the technology have refrained from doing so.

Scary but effective?

Since 2019, more than 600 law enforcement agencies in the US and Canada have been using Clearview AI. Ranging from the Department of Homeland Security and the FBI to local police departments, law enforcers have access to technology that helps them capture criminals and solve long-dormant cases faster. Hoan Ton-That, the CEO of Clearview AI, confirms that it’s not only about law enforcement, however. The tool is also licensed to at least a handful of companies for security purposes. According to Ton-That’s recent interview with BuzzFeed, the company already plans to sell the technology to governments in 22 countries.

Nowhere near finished

Clearview’s computer code, analyzed by The New York Times, includes language designed to pair the software with augmented-reality glasses. This suggests that in the future, individuals could identify every person they see in real time. Law enforcement agencies and Clearview’s investors do believe that the application will eventually be available to the public. However, the company avoids debating the possibilities for weaponization and shrouds itself in secrecy.

Social media platforms fight back

Facebook, LinkedIn, Twitter, Google, Venmo, and YouTube have already accused Clearview of violating their policies in cease and desist letters. Scraping and storing images this way is against most social media platforms’ policies. Twitter says that it explicitly bans such use of users’ data. Facebook’s spokesperson also claims that scraping people’s information violates its terms of service, and that the company will take appropriate action to stop it.

Yet, it’s not clear whether terms of service would be sufficient to win against Clearview in court. As Wired notes, there’s no federal law that explicitly protects user data, and only a handful of state acts and regulations can defend privacy rights. This raises the question of whether companies like Facebook should instead build technical barriers that make scraping harder.

Only the government can stop it

A few cities have already banned law enforcement agencies from using facial recognition. But if ever there was a stirring call for Congress to set tight restrictions on this technology, it’s right now. Still, enforcing acceptable uses of data collection is complex, and legislation is likely to keep lagging behind the technology. Even if Clearview doesn’t make its tool publicly available, a copycat might, as the cat is out of the bag. Now would be the best time to set your social media accounts to private.

Anton P.

Former chef and head of the Atlas VPN blog team. He's an experienced cybersecurity expert with a background in technical content writing.



© 2023 Atlas VPN. All rights reserved.