Face Recognition: Clever or Just Plain Creepy?
New photo programs include revolutionary face-spotting technology.
March 2, 2009 -- We have more than 25,000 digital photographs stored on our computer hard drives--most of them of people. Until now, our sole means of tracking down a familiar face was to search manually: by date, EXIF data, "tags," or the brute force of our own memory. Now computers can do the searching, thanks to the nifty face-recognition feature that Apple and Google have put into the latest versions of their photo-management systems.
Face recognition was one of those brilliant but technically iffy and ethically tricky counterterrorism technologies deployed as a result of the September 11 attacks. The idea was to automatically pick out terrorists as they walked through security checkpoints--only it didn't work out that way: in a test at the Tampa airport, for example, employees were correctly identified just 53 percent of the time.
Civil-liberties groups also raised concerns about false positives--people being mistakenly identified as terrorists, and possibly arrested, just because of their looks. And so, without a demonstrable benefit, face recognition largely dropped off the public's radar.
That's the public's radar, mind you. Many countries, including the United States, quietly revised their requirements for passport photos to make them friendlier to face-recognition software. The National Institute of Standards and Technology, which had been testing the technology since 1994, conducted large-scale face-recognition tests in 2002 and 2006.
Oregon and some other states began using face recognition to detect when one person tries to obtain a license under different names. And all the time, the technology kept getting better. Much better.
For a face-recognition system to work, a computer must first be able to detect the face--that is, given a photograph, it must find the faces in it. Technically, this is easier and more reliable than identifying a particular person, and the technology was pretty much perfected just after September 11, 2001. The result: face-detection systems began appearing in digital cameras and camcorders a few years ago.
These algorithms generally work by searching for objects that look like eyes, a nose, and perhaps something that's kind of round. They identify boxes where faces are likely to be, and then tell the autofocus system what part of the photo needs to be in focus. After all, everybody hates it when Grandma's eyes are blurry, right?
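The best-known detector from that post-2001 era is the Viola-Jones cascade of simple rectangular features, a trained copy of which ships with the open-source OpenCV library. As a rough illustration of what a face detector's job looks like--not what any particular camera or photo program actually runs--here is a minimal sketch in Python; the image file name and the tuning parameters are hypothetical starting values:

```python
# face_detect.py -- a minimal face-detection sketch using OpenCV's
# bundled Viola-Jones (Haar cascade) detector. Illustrative only:
# real cameras and photo programs use their own tuned detectors.
import cv2

def detect_faces(image_path):
    """Return a list of (x, y, width, height) boxes, one per face found."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    # The cascade scans grayscale intensity patterns, not color.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # scaleFactor and minNeighbors trade off speed, missed faces, and
    # false positives; these are common default-ish values.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(box) for box in faces]

if __name__ == "__main__":
    for box in detect_faces("grandma.jpg"):  # hypothetical file name
        print("Possible face at", box)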
So face recognition starts with face detection. The detected face is then rotated so that the eyes are level and scaled to a uniform size. Next, one of three technical approaches kicks in--each, of course, covered by its own set of patents and bundled into various vendor offerings. One approach measures facial landmarks, such as the relative positions of the eyes, nose, and mouth, and stores them as a compact mathematical template that can be stored and searched. A second uses the entire face as a template and performs image matching. A third attempts to create a 3-D model of the face and then performs some kind of geometric matching.
Based on our experience with the software, we believe that Apple's system uses a landmarks approach, while the Google system is doing some kind of image matching. But we could be wrong: neither company has publicized which algorithms it is using.
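To make the pipeline concrete, here is a toy sketch of the alignment step followed by the simplest conceivable whole-face image match: a normalized correlation between two aligned face crops. It is our own illustration of the general idea, assuming eye coordinates come from a detector like the one above; it is emphatically not Apple's or Google's code:

```python
# face_align_match.py -- sketch of face alignment plus naive whole-face
# image matching. Purely illustrative of the pipeline described in the
# article, not any vendor's actual algorithm.
import cv2
import numpy as np

FACE_SIZE = (100, 100)  # arbitrary uniform size for aligned faces

def align_face(face_crop, left_eye, right_eye):
    """Rotate a grayscale face crop so the eyes are level, then scale it.

    Eye coordinates are (x, y) pixels relative to the crop, presumably
    located by an earlier detection/landmarking step.
    """
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))  # tilt of the eye line
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)       # midpoint between eyes
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = face_crop.shape[:2]
    leveled = cv2.warpAffine(face_crop, rotation, (w, h))
    return cv2.resize(leveled, FACE_SIZE)

def match_score(face_a, face_b):
    """Whole-face image match: normalized correlation, 1.0 = identical."""
    a = face_a.astype(np.float64).ravel()
    b = face_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Real systems layer lighting normalization, finer landmark localization, and learned features on top of this, but the detect-align-compare skeleton is the common thread.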