Dressed in a green fleece jacket, Thrun warmly receives visitors at Google's headquarters in Mountain View, California, not far from Stanford. Just outside the entrance, Google employees playing beach volleyball are living representations of the company's aura of eternal corporate coolness. The formalities required for entry into the building, on the other hand, are roughly as strict as those at the Pentagon, and the company spokesman urges visitors to hurry. Google doesn't have all day.
Thrun, the computer science professor from Germany, has become an important figure at Google. He helped create Street View, the company's controversial compendium of photographic images of front yards and houses from around the world. Street View, Thrun explains, was a useful exercise in preparation for the autonomous vehicle project, since the streets that self-driving cars travel will also need to be thoroughly photographed first, although with a focus on a different type of data. Google Maps' collection of images, in other words, cannot be directly used for driverless vehicles.
Google's self-driving cars draw on a detailed directory of every street, building and bridge, all of which is stored on computer servers. Cameras and laser scanners mounted on vehicles check the images they receive from their surroundings against what is in this directory. In other words, this system's precision and reliability rely entirely on computing power -- something that is increasing at a furious pace.
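The matching step Thrun describes can be sketched in a few lines. This is a hypothetical illustration, not Google's code: `prior_map`, `match_detection` and the landmark names are invented, and the real system compares dense laser and camera data rather than a handful of points. The idea, though, is the same: check each live detection against a prebuilt directory of surveyed positions.

```python
import math

# Hypothetical prebuilt map: landmark id -> surveyed (x, y) position in metres.
prior_map = {
    "bridge_pier_7": (12.0, 40.5),
    "building_nw_corner": (-3.2, 18.9),
}

def match_detection(detection_xy, tolerance=0.5):
    """Return the known landmark within `tolerance` metres of a live
    sensor detection, or None if the map does not explain it."""
    for name, position in prior_map.items():
        if math.dist(detection_xy, position) <= tolerance:
            return name
    return None

# A detection near a surveyed bridge pier is recognized; an unexplained
# object (a pedestrian, a cardboard box) is not in the map and returns None.
print(match_detection((12.1, 40.4)))   # matches "bridge_pier_7"
print(match_detection((50.0, 50.0)))   # unmatched -> None
```

Tightening `tolerance` is one way to see why the article ties reliability to computing power: the finer the match the system can afford to compute in real time, the fewer false alarms it raises.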
Moore's Law, the chip industry's famous rule of thumb, posits that processing power doubles roughly every two years. The first Intel processor, built in 1971, had 2,300 transistors. Today's standard microchip holds over 2.5 billion. And as computers' processing speed increases, so does the reliability of robotic cars. Just a year ago, Thrun says, the test operators of these cars had to intervene an average of once every 8,000 kilometers (5,000 miles) to correct a mistake on the part of the automatic driving system. "Now we can drive 80,000 kilometers without having to intervene," he says.
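The article's two transistor figures are consistent with the doubling rule, as a quick back-of-the-envelope check shows (the function name and years here are illustrative, not from the text):

```python
def projected_transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count forward, assuming it doubles
    every `doubling_period` years (Moore's Law as stated)."""
    doublings = (year - start_year) // doubling_period
    return start_count * 2 ** doublings

# Starting from the 1971 Intel processor's 2,300 transistors,
# forty years means twenty doublings:
count_2011 = projected_transistors(2300, 1971, 2011)
print(count_2011)  # 2,411,724,800 -- roughly the "over 2.5 billion" cited
```

Twenty doublings multiply the count by about a million, which is why four decades turn a few thousand transistors into a few billion.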
That's impressive, but not yet a breakthrough. A human driver who made a serious mistake once every 80,000 kilometers wouldn't exactly be held up as a model driver -- and a computer that does so certainly won't be.
But Thrun tries to put things in perspective. His self-driving cars, he says, don't make careless mistakes. The cameras never ignore a red light, and the radar reliably prevents rear-end collisions. "In those areas," he says, "robotic vehicles are already better than humans."
The driverless vehicles are worse, though, at reliably identifying objects. "That's something we humans are incredibly good at," Thrun explains. He picks up objects from the conference table in front of him to illustrate his point: "Here, a telephone. A roll of tape. It's not something we have to think hard about."
The cars' cameras see these things just as clearly as the human eye does, but the computers take longer to assess whether or not it would be dangerous to drive over them. What this means is that a robotic car will slam on the brakes even when the object in question turns out to be just a cardboard box blowing down the street, because it cannot immediately rule out that the box is actually a baby carriage. And if suddenly braking for a cardboard box in the road causes a collision with the car behind, who is liable?