2012-07-26



Google has announced the launch of a new open-source API that enables developers to design and build rooms that recognise, interact with and engage the people present. Named, imaginatively enough, Interactive Spaces, the downloadable application uses a series of “event producers”, such as depth cameras that track an individual’s movements or pressure sensors that signal when they have been stepped on, and “event consumers”, receivers that respond to signals from the producers.
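To make the model concrete, here is a minimal sketch of how such a producer/consumer event system might be structured in Java, the language the application is written in. The names SpaceEvent, EventProducer and EventConsumer are invented for illustration and are not taken from the Interactive Spaces API:

```java
import java.util.ArrayList;
import java.util.List;

/** An event carrying a type tag and a payload, e.g. a sensor reading. */
class SpaceEvent {
    final String type;
    final Object payload;
    SpaceEvent(String type, Object payload) {
        this.type = type;
        this.payload = payload;
    }
}

/** Anything that reacts to events: lights, speech output, projections. */
interface EventConsumer {
    void onEvent(SpaceEvent event);
}

/** Anything that emits events: depth cameras, pressure sensors. */
class EventProducer {
    private final List<EventConsumer> consumers = new ArrayList<EventConsumer>();

    void addConsumer(EventConsumer consumer) {
        consumers.add(consumer);
    }

    /** Fan each event out to every attached consumer. */
    protected void emit(SpaceEvent event) {
        for (EventConsumer consumer : consumers) {
            consumer.onEvent(event);
        }
    }
}
```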

These consumers could be lights that switch on or a voice signal telling people to get off the carpet they have just stepped on. Any number of consumers and producers can be linked up in numerous ways to generate unique effects: Google suggests using gesture recognition to get a speech-synthesis machine to tell you it’s not polite to point. Basically, the possibilities are as abundant as a designer’s imagination: artists can produce unique interactive installations, or engineers can build live gaming environments.
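Building on the invented types above, a hedged sketch of that kind of wiring: a pressure-mat producer drives a speech consumer that reacts to carpet and pointing events. PressureMat and SpeechConsumer are hypothetical names chosen for this example, not part of the real API:

```java
/** Hypothetical producer; real hardware would drive emit() instead. */
class PressureMat extends EventProducer {
    void simulateStep() {
        emit(new SpaceEvent("carpet.stepped", "zone 3"));
    }
}

/** Hypothetical consumer standing in for a speech-synthesis machine. */
class SpeechConsumer implements EventConsumer {
    public void onEvent(SpaceEvent event) {
        if ("carpet.stepped".equals(event.type)) {
            System.out.println("SPEECH: Please step off the carpet.");
        } else if ("gesture.pointing".equals(event.type)) {
            System.out.println("SPEECH: It's not polite to point.");
        }
    }
}

public class WiringDemo {
    public static void main(String[] args) {
        PressureMat mat = new PressureMat();
        mat.addConsumer(new SpeechConsumer()); // any number of consumers can attach
        mat.simulateStep();                    // -> "Please step off the carpet."
    }
}
```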

Google uses “blob tracking” as an example of what the system can do. Ceiling-mounted cameras track a person’s movements and send signals to the floor, which then displays coloured circles round the person’s feet. The circles then follow the individual around the room. Scenarios can be adapted with a few lines of code, or original, detailed interfaces can be written from the ground up. It operates as a run-time environment, meaning the designer retains absolute control once an activity is in place and can manipulate it, starting or stopping its components from a central web application.
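The blob-tracking scenario can be sketched in the same style, again assuming the invented types above: a ceiling-camera producer publishes blob positions, and a floor-display consumer converts them to projector pixels, drawing one circle per tracked person. The pixels-per-metre scale is an assumed calibration constant, and the final draw step is logged rather than projected:

```java
/** Stand-in for a depth-camera driver publishing blob centroids. */
class CeilingCamera extends EventProducer {
    void publishBlob(double xMetres, double yMetres) {
        emit(new SpaceEvent("blob.position", new double[] { xMetres, yMetres }));
    }
}

/** Maps room coordinates to floor-projector pixels and draws circles. */
class FloorDisplay implements EventConsumer {
    private static final double PIXELS_PER_METRE = 100.0; // assumed calibration

    public void onEvent(SpaceEvent event) {
        if (!"blob.position".equals(event.type)) return;
        double[] pos = (double[]) event.payload;
        int px = (int) Math.round(pos[0] * PIXELS_PER_METRE);
        int py = (int) Math.round(pos[1] * PIXELS_PER_METRE);
        // A real installation would hand these to a projector; here we log.
        System.out.println("Draw circle at floor pixel (" + px + ", " + py + ")");
    }
}

public class BlobTrackingDemo {
    public static void main(String[] args) {
        CeilingCamera camera = new CeilingCamera();
        camera.addConsumer(new FloorDisplay());
        camera.publishBlob(1.2, 3.4); // the circle follows the person
        camera.publishBlob(1.3, 3.5); // as new positions arrive
    }
}
```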

The application is written in Java and runs on OS X, Linux and Windows. A scripting bridge enables the use of JavaScript, Python and the open-source openFrameworks toolkit.
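The article does not document the bridge’s interface, but Java’s standard javax.script API illustrates the general mechanism such a bridge can build on: the host application exposes Java objects to an interpreted script. A minimal, self-contained example of that idea (not the Interactive Spaces bridge itself):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ScriptBridgeDemo {
    public static void main(String[] args) throws ScriptException {
        // Which engines ship with the JDK varies by version (Rhino, Nashorn).
        ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
        if (js == null) {
            throw new IllegalStateException("No JavaScript engine on this JDK");
        }
        // Expose a Java object to the script under the name "log".
        js.put("log", System.out);
        // The script drives the Java side without being compiled.
        js.eval("log.println('hello from the scripting bridge');");
    }
}
```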

Source: Wired
