Your Robotic Personal Assistant

New software lets robots pick up objects they have never seen before.

By ABC News
February 19, 2009, 6:52 AM

Nov. 11, 2007 -- Aside from the Roomba, robots haven't made much progress infiltrating American homes. But researchers at Stanford University have developed software that overcomes one of the biggest challenges: teaching a robot how to pick up an object it has never encountered before. The software works by identifying the most graspable part of an unfamiliar object: the stem of a wineglass, the handle of a mug, or the edge of a book, for instance.
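To make the idea concrete, here is a minimal sketch of how such a grasp-point detector might work: slide a window over an image, score each patch with a learned model, and take the highest-scoring location as the candidate grasp point. The feature extractor, weights, and function names below are illustrative stand-ins, not the actual Stanford system.

```python
import numpy as np

def patch_features(patch):
    """Stand-in features: mean intensity, variance, and edge energy."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.var(), np.abs(gx).mean() + np.abs(gy).mean()])

def best_grasp_point(image, weights, bias, patch_size=16, stride=8):
    """Slide a window over the image and return the centre of the patch
    whose learned 'graspability' score is highest."""
    h, w = image.shape
    best_score, best_xy = -np.inf, None
    for y in range(0, h - patch_size, stride):
        for x in range(0, w - patch_size, stride):
            patch = image[y:y + patch_size, x:x + patch_size]
            score = weights @ patch_features(patch) + bias  # linear classifier score
            if score > best_score:
                best_score, best_xy = score, (x + patch_size // 2, y + patch_size // 2)
    return best_xy, best_score

# Toy usage: a random "image" and made-up weights. In practice the weights
# would be learned from labelled examples of graspable regions.
rng = np.random.default_rng(0)
image = rng.random((128, 128))
weights, bias = np.array([0.1, 0.5, 1.0]), -0.2
point, score = best_grasp_point(image, weights, bias)
print("candidate grasp point:", point, "score:", round(float(score), 3))
```

The key point of the approach is that the model scores local visual features rather than matching against a stored 3-D model, which is what lets it generalize to objects it has never seen.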

Engineers and science-fiction fans have long dreamed of putting robots in the home, says Andrew Ng, professor of computer science at Stanford. In fact, the robotic hardware that exists today could handle the complex tasks required to pick up objects, keep a house clean, and so on. The missing piece, Ng explains, is software that lets robots do these things on their own. A dexterous robot with the smarts to pick up new objects without being specifically programmed to do so could be useful for complex domestic tasks such as feeding the pets and loading the dishwasher.

While it's true that some robots are capable of picking up specific objects, even on a cluttered table, they do so with the help of preprogrammed three-dimensional models of those objects, says Aaron Edsinger, founder of Meka Robotics, a startup in San Francisco. "But this assumes that we're going to be able to know ahead of time what objects are out there," he says. That might be feasible in a carefully constructed nursing home, for instance, but not in a busy family's apartment or house.

Instead of using predetermined models of objects, some roboticists, including Edsinger and Ng, are building perception systems that look for features on objects that are good for grasping. The Stanford team has approached the problem by combining a number of previously separate technologies, says Ng, such as computer vision, machine learning, speech recognition, and grasping hardware, into a robot called STAIR (Stanford Artificial Intelligence Robot), as sketched below.
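A rough illustration of what stitching those pieces together might look like in software follows. The component names, interfaces, and stand-in implementations are hypothetical; they show only the general idea of chaining speech understanding, vision, and grasp planning into one pipeline, not the actual STAIR architecture.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Pipeline:
    interpret_command: Callable[[str], str]              # speech/NLP -> target object label
    locate_object: Callable[[str], Tuple[float, float]]  # vision -> object coordinates
    plan_grasp: Callable[[Tuple[float, float]], str]     # grasp planner -> motor command

    def run(self, utterance: str) -> str:
        label = self.interpret_command(utterance)
        position = self.locate_object(label)
        return self.plan_grasp(position)

# Toy stand-ins so the example runs end to end.
robot = Pipeline(
    interpret_command=lambda text: text.replace("pick up the ", ""),
    locate_object=lambda label: (42.0, 17.0),   # pretend vision found the object here
    plan_grasp=lambda xy: f"close gripper at {xy}",
)
print(robot.run("pick up the mug"))
```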