Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It

Kashmir Hill is a technology and privacy reporter at the New York Times. She has written for a number of publications, including The New Yorker, The Washington Post, Gizmodo, Popular Science, and Forbes.

Below, Kashmir shares 5 key insights from her new book, Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It. Listen to the audio version—read by Kashmir herself—in the Next Big Idea App.


1. This was not a technological breakthrough.

When I first heard about Clearview AI, I had just become a reporter at the New York Times, and I got a tip from somebody who had discovered marketing material for Clearview in a public records request. Essentially, that material said that Clearview had scraped billions of photos from the public web, social media sites, and education sites and, with these photos, built a facial recognition app that could identify a person with 99 percent accuracy. It could also bring up all the photos of that person online. My colleagues and I were stunned. The legal experts I called had never heard of anything like this before and were astonished by what Clearview AI had done.

The tools to do this had been around for a while, essentially sitting out there in the open online. What the small team of misfits at Clearview AI had done was not a technological breakthrough, but rather ethical arbitrage. They were willing to do what others weren’t. They were willing to gather these photos and make a radical tool that other companies had deemed taboo. In the classic regulatory-entrepreneur style of Uber and Airbnb, they did something that was in a legal gray zone and made it their own.

2. Google and Facebook actually protected our privacy here—in a way.

Google and Facebook aren’t necessarily known as privacy protectors. Most people associate them with changing privacy as we know it, since they have created new ways for us to find information about each other and to post information about ourselves. But both companies had actually developed Clearview AI-like tools internally years ago, with the ability to take a photo of somebody and find out their name or find other photos of them. Both companies, however, held that technology back.

Back in 2011, Eric Schmidt said that this kind of face recognition was the one technology that Google had developed and ultimately deemed too dangerous to release. These two technology giants both decided they shouldn’t release facial recognition technology, and they even bought up small startups doing the same thing. Thus, this superpower was prevented from coming out earlier.

“Both companies, however, held that technology back.”

These days, the tools have been open-sourced thanks to an AI-neural-network technology revolution, and facial recognition is now something that anyone can potentially build.

Facebook’s culpability, however, lies in the fact that many of its photos have been scraped by companies such as Clearview AI. A photo of somebody can now lead directly to their Instagram account. Facebook could have done more to prevent that kind of scraping.

3. The world of personal information is about to reorganize itself around our biometric information.

For the last few decades, we have been building these online dossiers, posting lots of information about ourselves, and having other people collect information about us through our clicks and our streams. Everything that we’ve been doing has been gathered online. Right now, it’s associated with maybe your name, maybe a cookie on your computer, or maybe some data broker’s file.

But increasingly, this online dossier is going to be directly linked to us via our face. It is going to be trivial to identify somebody as they walk through the world. This whole online dossier will include who our friends are, where we live, our credit rating, and what we are willing to spend in a given year. That is going to change what it means to navigate the public world.

Artificial intelligence is growing more powerful, so we will likely also see this happen with other biometric information, such as a unique voiceprint. A voiceprint could be linked to everything I’ve ever said that has been recorded. Increasingly, we’re using our biometric information to unlock services. Charles Schwab lets people access their online financial accounts and verify who they are via voiceprint. You can use your face now in many airports to board a plane. Recently, while traveling to London, I entered the country by putting my passport down on a scanner that read its embedded biometric chip. A camera then looked at me, verified I was who my passport said I was, and let me into the country. I didn’t have to wait in a customs line for hours.

“That is going to change what it means to navigate the public world.”

But this will also mean it will be easier to unlock information about us. The most notable example is in New York City. Madison Square Garden has decided to bar lawyers who work at firms that have lawsuits against Madison Square Garden or its parent company, MSG Entertainment. Any time one of these thousands of lawyers tries to go to a concert, show, or sporting event at Madison Square Garden, Radio City Music Hall, or the Beacon Theatre, they get stopped at the metal detector and turned away. They are then told they’re not welcome until the firm they work for drops the case or the litigation is resolved.

This would be a world in which our face is linked to who we are. Imagine writing a bad Yelp review about a restaurant, and then upon your next visit, they say you’re no longer welcome there.

4. We really need to rethink what we make publicly available.

We have to rethink the public commons of the internet. What these facial recognition companies have built was only possible because we all put a bunch of photos online in the first place. Facial recognition technology only got so good because there were billions of faces to train it. It is now able to recognize each of us because we have put our own photos out there, or others have put our photos out there.

Artificial intelligence is getting so powerful because it is able to crunch so much data. OpenAI and other AI companies create things like ChatGPT by scraping text and data readily available on the internet. Everything public is being used in new ways.

It’s been a problem for artists who are seeing their art get sucked up. It’s a problem for publications like the New York Times, which no longer wants OpenAI to scrape its content. We have to think about what we are putting out there and how it will be used, because anything that is public is going to be scraped. This scraped information, in turn, might be used by AI in ways that we can’t predict.

Many of the people putting their photos on the internet for the last 20 years were probably not thinking that one day facial recognition technology was going to come along and link all their photos together. It was hard to predict, but now it is our reality.

5. Laws work.

It feels strange to say, and it’s true only to an extent, but some people have more rights over their faces than others.

For example, people who live in Illinois have better-protected faces than the rest of the American population. That is because Illinois passed an incredibly prescient law in 2008 called the Biometric Information Privacy Act, or BIPA. BIPA says that if you want to use people’s biometric information, including their face, voiceprint, or other bodily information, you must get their consent. A company that wants to use Illinoisans’ faceprints needs to get their consent first or face damages of up to $5,000 per violation. For that reason, companies don’t offer certain products there. Google’s Nest cam doesn’t do facial recognition in Illinois the way it does in other states, and Clearview AI has gotten in a lot of trouble there.

“Privacy laws actually matter and they actually work.”

In Europe, there are privacy laws that prohibit what Clearview AI has done, and they have forced Clearview to stop doing business in Europe. Privacy laws actually matter and they actually work.

We as a society have a decision to make right now. How widespread do we want facial recognition to be? Do we want to be subject to this? Do we want to have our photos pulled into this? Do we want everyone with photos on the internet to have the experience of famous people, being recognized everywhere they go?

Imagine you’re at a restaurant, and you think you’re anonymous while having a sensitive conversation. With a tool like Clearview AI, somebody next to you could take your photo, know who you are, and suddenly understand the context of that private conversation. This new technology will drastically change what it means to be out in public.

To listen to the audio version read by author Kashmir Hill, download the Next Big Idea App today.