I Think I Know You – Part 2
Created: 19 Aug 2011 16:30:16 GMT
In 2004, Massachusetts Senator Edward “Ted” Kennedy was refused an airline boarding pass by the Transportation Security Administration (TSA) on five different occasions. Despite being from one of the most famous families in American politics, not to mention being a U.S. Senator, he still appeared on a no-fly list designed to prevent terrorists from boarding airplanes. This was a mistake; one that took three weeks to clear up. No explanation was ever publicly given. One has to assume that there was someone else, presumably a suspected terrorist, with a similar name.
I was reminded of that incident at Black Hat, where Alessandro Acquisti from Carnegie Mellon University presented a paper called, “Faces of Facebook: Privacy in the Age of Augmented Reality” (which is also the starting point for the first part of this series).
The TSA started testing facial recognition software in 2003. Eight years is a long time in software development. Given the advances in commercial software, if facial recognition has yet to be installed in airports, it’s not because of any technology limitation (unless we consider accuracy…more on that later.)
The use of facial recognition by the government goes well beyond airports and the TSA, though. And it is certainly not restricted to the United States. The South Korean government has taken photographs of over 23,000 people since 2003, and they have used facial recognition software to match them to photos and names in resident and driver registration databases.
Police in Vancouver reportedly used facial recognition software to try to identify people who participated in riots there this past June. No word on which was more successful: using facial recognition, or finding those who boasted of their rioting skills on Facebook. Facebook played an additional role, too: a page was created where people could post photos they had taken of rioters in order to help the police.
A tool called MORIS is soon to be released for law enforcement agencies. It’s a mobile device that will be able to scan fingerprints, irises and facial features, enabling the police to identify a suspect without even taking them back to the station. It will be sold by a private company that manages its own database.
The FBI is working to improve access to its fingerprint database with a project called NGI, Next Generation Identification. And they are working on an initiative that will “also explore the capability of facial recognition technology.”
These are just some of the examples I was able to find with a quick Internet search. Presumably, a deeper search would reveal a great many more.
The promise of this sort of tool has to be very appealing to those in law enforcement. Just think of all the other ways it could be used. Say you were on the lookout for terrorists or criminals trying to use identity theft to get legitimate forms of identification. A quick check of facial recognition software would not only prevent you from issuing the ID, it would also call out the cops. According to the Boston Globe, at least 34 states are using such systems to review driver’s licenses for identity theft.
But what if you don’t have access to a government database of photos, or of photos helpful citizens gave you, yet you want to identify someone from a picture? This is the problem Professor Acquisti and his team tried to solve, and what they reported at Black Hat was that they could do pretty well with off-the-shelf facial recognition software and cheap webcams. Where did they get their database of photos? Facebook, of course.
Facebook has an estimated 100 billion photos. Many of them are conveniently tagged with user names, and many of those are in accounts where users have left them “wide open,” in other words, with no security that would restrict who has access to those photos. All of Acquisti’s team’s work was done using publicly available photos.
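The basic approach is easy to sketch. Recognition software turns each face into a numeric “embedding,” and an unknown face is matched to whichever tagged photo is closest, as long as it’s close enough. The toy version below assumes made-up 4-dimensional embeddings and a made-up distance threshold; real systems use embeddings with a hundred or more dimensions, produced by the recognition software itself. The names and numbers here are purely illustrative.

```python
import numpy as np

# Hypothetical embeddings for tagged photos scraped from public profiles.
# Real face embeddings are much higher-dimensional; these are placeholders.
database = {
    "alice": np.array([0.1, 0.9, 0.3, 0.5]),
    "bob":   np.array([0.8, 0.2, 0.7, 0.1]),
}

# Assumed cutoff: probes farther than this from every entry get no match.
THRESHOLD = 0.6

def identify(probe, db, threshold=THRESHOLD):
    """Return the name of the closest database entry within threshold,
    or None if no entry is close enough."""
    best_name, best_dist = None, float("inf")
    for name, embedding in db.items():
        dist = float(np.linalg.norm(probe - embedding))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

webcam_shot = np.array([0.15, 0.85, 0.35, 0.45])  # close to "alice"
print(identify(webcam_shot, database))  # alice

stranger = np.array([0.5, 0.5, 0.5, 0.9])  # close to nobody
print(identify(stranger, database))  # None
```

Note that everything hinges on the threshold: set it too loose and strangers get matched to the wrong name, set it too tight and real matches are missed. That tradeoff is exactly where the mistakes discussed below come from.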
So what is there to worry about? What’s wrong with being better at catching thieves and terrorists? Not a darn thing. But, this is where Ted Kennedy comes in: two people having the same name is pretty common, but few of us are as well known as Edward Kennedy was; if a mistake like that can happen with names, it’s going to happen with faces.
They say no two faces are the same. But we are talking about software trying to do a very, very difficult task. There will be mistakes. In fact, it didn’t take me very long to find an example. The goal of the Massachusetts program in that example actually sounds pretty good. Nobody wants the bad guys getting their hands on legitimate driver’s licenses. And they do have a plan to correct mistakes.
Of course it hasn’t happened to me, so I didn’t have to go through the hassle of proving who I was. With facial recognition software, you can be guilty of looking like someone else till proven innocent.
Of bigger concern is what happens when facial recognition software is used everywhere. What happens if I get refused at the ATM, or get turned away at a business, because I look like someone who’s stolen credit cards? I may not even be told that it was my face that caused the problem. If Ted Kennedy couldn’t find out why they thought he was a terrorist, what are my chances?