Facial recognition technology is rapidly becoming integrated into our daily lives, from unlocking our smartphones to personalized ads.
But as it becomes increasingly ubiquitous, so too do debates over the ethical and legal boundaries of its use. Kashmir Hill, a New York Times technology reporter, recently shed light on one particularly secretive company in this space: Clearview AI. Hill’s research suggests that stealth startups may be dramatically redefining privacy norms while staying under the radar.
We are on the brink of a world where facial recognition goes out of control and anonymity becomes impossible. I think it’s still possible to intervene before it becomes completely normal.
— Kashmir Hill (@kashhill) September 25, 2023
Clearview AI’s capabilities
Powerful facial recognition tools are at the core of Clearview AI’s offering. Upload a photo of a person, and the tool promises to show everywhere that face appears online, including major platforms like Facebook, Instagram, and LinkedIn, as well as less obvious ones like Venmo. The results go beyond public profiles: photos you never knew were online could be exposed, an invasion of privacy on an unprecedented scale.
A controversial tool
While the benefits for law enforcement are clear, including solving crimes and finding suspects, the potential for abuse is disconcerting. Hill sketches some dire scenarios: a woman leaving a clinic could be identified and harassed by protesters, or a stranger in a bar could use the tool to mine a wealth of personal data about someone they just met. This lack of control over one’s own image is alarming to many.
Compounding the concern is the technology’s accuracy, or potential lack thereof. Misidentifications have been reported, especially among people of color. Such errors not only reinforce racial bias in policing but can also have serious consequences for those who are wrongly accused.
From Google’s labs to Clearview’s product
Interestingly, Clearview AI is not the first to venture into this controversial area. Big tech companies like Google and Facebook dabbled in similar technology before but, aware of the potential pitfalls, chose not to release it to the public. That self-regulation makes Clearview AI’s move even more remarkable.
Clearview AI’s edge isn’t only its technology; it is also its data collection methods. The company systematically scraped billions of photos from across the internet, pulling from many sources without explicit permission from the platforms or the individuals pictured.
Clearview’s secret activities
Clearview AI remained a mystery for much of its early existence. Even locating an office proved difficult for Hill: the address on record led to a building that didn’t exist, and initial attempts to contact company representatives hit a wall. But as Hill dug deeper into her investigation, she discovered that Clearview was tracking her. Whenever law enforcement ran her image through Clearview’s system, the company was alerted and frequently intervened, flaunting its intrusive surveillance capabilities.
Demand for regulatory oversight grows stronger
The ready availability of such powerful technology and the demonstrated willingness to use it proactively has led to growing calls for regulatory oversight of Clearview AI. Some U.S. states, including California, Colorado, Virginia and Connecticut, have laws in place that allow residents to request that their data be removed from Clearview’s database.
However, these piecemeal regulations may not be enough. As Clearview AI and companies like it continue to push the limits of facial recognition, a comprehensive national, and even global, approach may be essential to protecting public privacy.
Kashmir Hill’s revelations about Clearview AI highlight the urgent need for a public discussion of the boundaries of facial recognition technology. As the lines between public and private blur, society must decide where the limits are and who gets to draw them.