A Display That Tracks Your Movements

There could be a revolution brewing in billboard advertising. Instead of simply presenting a static image, why not let people interact with the advertisement? This is the vision of electronics giant Samsung and interactive advertising company Reactrix Systems. The two companies have partnered to bring 57-inch interactive displays to Hilton hotel lobbies by the end of the year. These displays can "see" people standing up to 15 feet away from the screen as they wave their hands to play games, navigate menus, and use maps.

With the buzz surrounding the Wii, the iPhone, and Microsoft's Surface, "people are more open and ready to interact using their hands and gestures," says Matt Bell, chief scientist and founder of Reactrix. It's easy to see how a gesture-based interface might work well for video games and virtual worlds, and companies such as Belgian startup Softkinetic already build systems for exactly those applications. But Reactrix is aiming at the out-of-home advertising market, traditionally dominated by large static displays such as billboards. Founded in 2001, Reactrix already has some experience here: its interactive floor displays attract crowds in shopping centers across the country.

The basic idea behind Reactrix's system, and even behind low-end gesture-based technologies such as the Sony PlayStation Eye, is to use a camera to detect a person's body and then apply computer-vision algorithms to make sense of the images. Reactrix and Softkinetic systems differ from the PlayStation Eye, however, in that they capture 3-D information rather than just 2-D information. There are many types of cameras that can capture 3-D scenes, says Bell, but in the current displays built with Samsung, the company is using a stereoscopic camera with two lenses. Next to the camera, an infrared light projects an invisible pattern onto the people in front of the screen. Each lens captures a slightly different view of the scene, and, based on the disparity between the two images, the system can resolve distance to within a fraction of an inch. Bell adds that the projected light pattern improves the system's accuracy in uneven lighting.
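
The depth-from-disparity step Bell describes can be illustrated with a short sketch. The Python snippet below is not Reactrix's code; the focal length, lens baseline, search range, and block-matching approach are hypothetical placeholders. It shows only the textbook relationship between how far a feature shifts between the two views and how far that feature is from the camera.

```python
# A minimal sketch (not Reactrix's actual pipeline) of turning two-lens
# stereo images into depth. Camera parameters below are assumed values.
import numpy as np

FOCAL_PX = 700.0     # focal length in pixels (hypothetical)
BASELINE_M = 0.06    # spacing between the two lenses in meters (hypothetical)
BLOCK = 5            # half-width of the matching window
MAX_DISP = 64        # maximum disparity searched, in pixels

def disparity_map(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Brute-force block matching: for each left-image pixel, find the
    horizontal shift that best aligns its neighborhood with the right image."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(BLOCK, h - BLOCK):
        for x in range(BLOCK + MAX_DISP, w - BLOCK):
            patch = left[y - BLOCK:y + BLOCK + 1, x - BLOCK:x + BLOCK + 1]
            best_cost, best_d = np.inf, 0
            for d in range(MAX_DISP):
                cand = right[y - BLOCK:y + BLOCK + 1,
                             x - d - BLOCK:x - d + BLOCK + 1]
                # Sum of absolute differences between the two patches.
                cost = np.abs(patch.astype(np.int32) - cand.astype(np.int32)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_map(disp: np.ndarray) -> np.ndarray:
    """Triangulation: depth = focal_length * baseline / disparity."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, FOCAL_PX * BASELINE_M / disp, np.inf)
```

A larger disparity means a closer object, which is why the conversion divides by the disparity; the sub-pixel precision of real systems is what lets a setup like this resolve distance to a fraction of an inch.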

Once the camera collects the images, it passes them to a specialized processor that analyzes the depth data, bypassing general-purpose software that couldn't compute fast enough. "Once that's done, we have a full-depth image showing the distance to every object," Bell says. At this point, Reactrix's own algorithms take over. One factor differentiating Softkinetic from Reactrix is that the former focuses on the detailed motion of parts of a single body, whereas the latter strives to disambiguate multiple people and objects. Bell doesn't provide details, but he says that the code is designed to figure out scenarios such as whether people are holding hands or standing shoulder to shoulder.
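
Reactrix hasn't disclosed how its algorithms tell people apart, so the sketch below only illustrates a generic approach to the problem: threshold the depth image to keep objects within the display's roughly 15-foot range, then label connected blobs as candidate people. The range cutoff, minimum blob size, and use of SciPy are assumptions made for illustration, not details from the company.

```python
# A hedged sketch of generic person segmentation on a depth image;
# this is not Reactrix's algorithm, only one common starting point.
import numpy as np
from scipy import ndimage

MAX_RANGE_M = 4.5   # roughly 15 feet; anything farther is treated as background

def segment_people(depth_m: np.ndarray, min_pixels: int = 500) -> np.ndarray:
    """Return a label image where each sufficiently large foreground blob
    (a candidate person) gets its own integer ID."""
    foreground = np.isfinite(depth_m) & (depth_m < MAX_RANGE_M)
    labels, count = ndimage.label(foreground)
    # Discard tiny blobs that are more likely noise than a person.
    for blob_id in range(1, count + 1):
        if (labels == blob_id).sum() < min_pixels:
            labels[labels == blob_id] = 0
    return labels
```

A simple scheme like this merges two people standing shoulder to shoulder, or holding hands, into a single blob, which is exactly the kind of ambiguity Bell says Reactrix's code is designed to resolve.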
