Interface Controlling

From NUI Group Community Wiki


Introduction

My project was focused on interactions with a multi-touch surface in a bar environment. Since that project is over, I'm interested in continuing this on an experimental basis, with some goals.

One of these goals is to create a working interface that lets me control an existing one; the most common is the Windows interface. It would be cool to use multiple inputs to control windows. I know it has already been done, but I don't have the proper (free) software to compile with, and I don't know how to work with C and similar programming languages. I also haven't yet seen any examples elsewhere that I could grab. I saw Touchlib, but I have no idea what it does or what to do with those files.

As with most things, I start from scratch, and since I'm educated as an industrial designer I will start with what most of these designers do:

  • Analysis of the subject
  • List of requirements
  • Preliminary solutions
  • Plan of campaign
  • Execution of the plan...

I will update this wiki when progression permits... feel free to add/comment.

Analysis of the subject

In a nutshell: Jeff Han introduced a new way of multi-touch interaction. In January 2006 the internet community was amazed by the things he could do with his fingers on just a panel of plastic with some lights shining in from the sides. The method he used is multi-touch through FTIR (Frustrated Total Internal Reflection). Infrared light sources shine in from the side of the acrylic panel he uses, and the light stays within the panel until an object makes direct contact with the acrylic. Contact forces the light to leave the acrylic, so the surface lights up where contact is made, forming an image that a computer can process to compute multiple touch points.

Source: Jeff Han

Typical touch-panel surfaces only allow single touch, or they average multiple signals into a single touch. There are of course other techniques that make multiple-signal processing possible; one of them is the electromagnetic resonance (EMR) technology found in Wacom tablets. Multi-touch through machine vision is a cheaper solution and requires less advanced knowledge of the technology behind it (because it's actually really simple)... The goal is to create an intuitive interface through software using the method described above (multi-touch through FTIR).


List of requirements

  • The ability to track multiple inputs
  • Intuitive Interface
  • Gesture recognition...
  • Manipulate and handle objects on screen
  • ...

Preliminary solutions

Tracking multiple inputs

The video of Jeff Han is what brought our interests together, and we've all seen how this works. The method of preference is therefore multi-touch via FTIR. This method makes multiple inputs possible with affordable hardware. The (IR) camera captures the objects (in this case fingertips) that light up on the surface through FTIR. The image is sent to the computer, and the software analyses it. Several methods can be used for analysis:

  • Blob tracking
  • Contour tracking
  • Difference tracking (with a presampled image)
  • Motion tracking
  • ...
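To make the blob-tracking and difference-tracking ideas concrete, here is a minimal sketch in pure Python on a toy grayscale grid. The function name, threshold, and test image are my own assumptions; a real tracker would run on live camera frames, typically with a vision library.

```python
# Minimal blob tracking sketch: subtract a presampled background image
# (difference tracking), threshold the result, group the bright pixels
# into connected components (blobs), and report each blob's centroid
# as a touch point.

def find_blobs(frame, background, threshold=50):
    """Return a list of (row, col) centroids of bright regions."""
    h, w = len(frame), len(frame[0])
    # Difference tracking: remove the static background first.
    diff = [[frame[y][x] - background[y][x] for x in range(w)] for y in range(h)]
    mask = [[diff[y][x] > threshold for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one connected component (4-connectivity).
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                avg_y = sum(p[0] for p in pixels) / len(pixels)
                avg_x = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((avg_y, avg_x))
    return blobs

# Two fingertips lighting up on a dark background:
background = [[10] * 6 for _ in range(6)]
frame = [row[:] for row in background]
for y, x in [(1, 1), (1, 2), (4, 4)]:
    frame[y][x] = 200
print(find_blobs(frame, background))  # [(1.0, 1.5), (4.0, 4.0)]
```

Each centroid would then be fed to the interface as one touch point; matching centroids between frames is what turns detection into tracking.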

Intuitive interface

What makes an interface intuitive? If you are used to using Windows you will have a different experience compared to someone who has no experience with Windows... ...

Gesture recognition

Some standard gestures that are easy (for the software to recognize and for the user to learn) should be made possible on the interface. How can we make the software recognize gestures?

  • Comparing for certain shapes?
  • Looking for certain vector changes?
  • ...
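As one way of "looking for certain vector changes", here is a toy sketch in Python: quantize the movement between successive touch points into compass directions, collapse repeats, and match the resulting pattern against a table. The direction encoding and the gesture table are invented for illustration only.

```python
# Toy gesture recognizer based on vector changes between successive
# touch points of one finger.
import math

def directions(points):
    """Encode a stroke as a string of compass directions, e.g. 'EN'."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        # Quantize the movement vector to E/N/W/S (y axis pointing up).
        d = "ENWS"[int(round(angle / (math.pi / 2))) % 4]
        if not dirs or dirs[-1] != d:   # collapse repeated directions
            dirs.append(d)
    return "".join(dirs)

# Hypothetical gesture table, made up for this example:
GESTURES = {"E": "swipe right", "N": "swipe up", "EN": "corner up"}

def recognize(points):
    return GESTURES.get(directions(points), "unknown")

# A stroke going right, then up:
print(recognize([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]))  # corner up
```

Shape comparison would work on the same input but match the whole point cloud against templates instead of the direction sequence.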

Manipulate and handle objects on screen

We've all seen the fancy photo browsing stuff; now we want to do it ourselves.

- What mode are we in? The software needs to know which mode we are in to combine each signal with the specific function...

- Rotating objects

 *One finger stationary and the other pivots around it.
 *Two fingers pivot around their center of gravity.
 *...
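The first rotation idea above can be sketched with a little trigonometry: the change in the angle of the line between the two fingertips is the rotation to apply to the object. This is a minimal illustration, not code from any existing tracker.

```python
# Two-finger rotation sketch: compare the angle of the finger pair
# in two successive frames.
import math

def rotation_delta(p1_old, p2_old, p1_new, p2_new):
    """Angle (radians) the finger pair rotated between two frames."""
    a_old = math.atan2(p2_old[1] - p1_old[1], p2_old[0] - p1_old[0])
    a_new = math.atan2(p2_new[1] - p1_new[1], p2_new[0] - p1_new[0])
    # Wrap into (-pi, pi] so a small clockwise turn isn't read as ~2*pi.
    return (a_new - a_old + math.pi) % (2 * math.pi) - math.pi

# One finger stationary at the origin, the other pivots a quarter turn:
delta = rotation_delta((0, 0), (1, 0), (0, 0), (0, 1))
print(math.degrees(delta))  # approximately 90 degrees
```

The same function covers both bullet points: whether one finger is stationary or both pivot around their centre only changes the inputs, not the angle computation.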

- Moving objects

 *One finger moves an object using the vector between that finger's coordinates in two successive frames?
 *Pointing at an object (click? select?) and then pointing at its new position.
 *Using two fingers to move it to a new position?
 *...
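The first moving idea is just a translation by the finger's frame-to-frame vector. A hypothetical `move` helper, assuming we already know which object the finger grabbed:

```python
# One-finger move sketch: translate the object by the vector between
# the finger's positions in two successive frames.
def move(obj_pos, finger_prev, finger_now):
    dx = finger_now[0] - finger_prev[0]
    dy = finger_now[1] - finger_prev[1]
    return (obj_pos[0] + dx, obj_pos[1] + dy)

# Finger moved from (3, 4) to (5, 7), so the object shifts by (2, 3):
print(move((10, 10), (3, 4), (5, 7)))  # (12, 13)
```

Applied every frame while the finger stays down, this makes the object follow the fingertip.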

- Selecting (multiple) objects

 *Press on an object; the software analyses whether the finger input is within the area of the object.
 *Press sequentially on each object to select more...
 *Draw a shape around (an) object(s); objects are selected when they lie within the area of the shape.
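The press-to-select and sequential-selection ideas could look like this in Python; the object bounds and names are made up for the example, and the hit test uses simple axis-aligned rectangles.

```python
# Selection sketch: hit-test a touch point against each object's
# bounding box, and accumulate sequential presses into a selection.
def hit(point, bounds):
    """bounds is (x, y, width, height); point is (x, y)."""
    x, y, w, h = bounds
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

objects = {"photo1": (0, 0, 10, 10), "photo2": (20, 0, 10, 10)}
selection = []
for press in [(5, 5), (25, 3)]:          # two sequential presses
    for name, bounds in objects.items():
        if hit(press, bounds) and name not in selection:
            selection.append(name)
print(selection)  # ['photo1', 'photo2']
```

The lasso variant would replace `hit` with a point-in-polygon test against the drawn shape, applied to each object's centre.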

- Creating objects

- Removing objects