The gesturespace is the final thesis of my interaction design studies at the Zurich University of the Arts. It is an immersive, interactive projection that can be controlled very precisely simply by using your hands and body. This makes it possible to create interactive applications, e.g. in museums or at exhibitions. Without the need for any additional tools, you can browse through images and zoom into 3D models – just with your own hands!
The installation was presented for one week at the Zurich University of the Arts. The feedback from the public was excellent; visitors particularly appreciated being able to simply come and try everything out. When I started the thesis in January, there was some discussion about the usefulness of such an application and whether it was even possible to build all this in just three months.
Topics in Interaction Design
This thesis addresses several fundamental research topics in interaction design (touchless IxD in particular):
- Gestures: what defines a gesture, and how can gestures be used to interact with a complex system? How can we distinguish gestures so that they don't interfere with each other?
- Interacting by simply moving one's own body: a very natural way of interacting
- What comes after the multitouch table? What comes after the iPod (technologically speaking)?
- How can we generalize multitouch gestures so that they also work in touchless systems?
- How does interaction help to build immersive systems? When interacting with the gesturespace 3D model, the user gets the illusion of standing in front of a real, physical 3D model because there is a correspondence between body movement and the input the brain's visual system expects.
Contribution to research: the Virtual Multitouch Layer
In my thesis, I developed an algorithm for the parallel recognition of hand and body movements. This is done by inserting an imaginary layer between the camera and the body; this layer separates the hands from the body. The detailed algorithm is described in the paper (see related links). The key is how to calculate the optimal position for this layer, which is done using distance statistics. Relying only on point distances instead of any recognition algorithm makes the procedure very stable.
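To give an idea of the principle, here is a minimal C++ sketch of how such a layer position could be derived from distance statistics and then used to split the depth image into "hand" and "body" pixels. The variable names and the specific statistic (a low percentile of the distance distribution) are my own illustrative assumptions, not the exact method from the paper:

```cpp
// Illustrative sketch only: place a virtual layer between the camera and the
// body based on simple distance statistics, then treat everything in front of
// the layer as "hand" pixels (the virtual multitouch points).
#include <algorithm>
#include <cstddef>
#include <vector>

// Depth image as a flat vector of distances (e.g. in millimetres), 0 = no reading.
float computeLayerDepth(const std::vector<float>& depth, float percentile = 0.05f)
{
    std::vector<float> valid;
    valid.reserve(depth.size());
    for (float d : depth)
        if (d > 0.0f) valid.push_back(d);        // ignore invalid pixels

    if (valid.empty()) return 0.0f;

    // Place the layer at a low percentile of the distance distribution,
    // i.e. just behind the closest points (the outstretched hands).
    // Using a percentile instead of the raw minimum keeps the layer
    // stable against single noisy pixels.
    std::size_t k = static_cast<std::size_t>(percentile * (valid.size() - 1));
    std::nth_element(valid.begin(), valid.begin() + k, valid.end());
    return valid[k];
}

// Everything closer to the camera than the layer counts as a "touch".
std::vector<bool> segmentHands(const std::vector<float>& depth, float layerDepth)
{
    std::vector<bool> isHand(depth.size(), false);
    for (std::size_t i = 0; i < depth.size(); ++i)
        isHand[i] = depth[i] > 0.0f && depth[i] < layerDepth;
    return isHand;
}
```

Because the layer position is recomputed from the distance statistics of each frame, it can follow the user as they move towards or away from the camera.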
Photos from the design process
Tags: 3d, c++, embodied, illusion, infrared, physical computing, vision
This entry was posted on Saturday, June 20th, 2009 at 19:28 and is filed under ZHDK IAD.