We constantly come across interesting user interface technologies; Microsoft’s Natal was one I’ve written about before. Others, too, are pushing the limits of human-computer interaction. However, these technologies are a long way from maturing and becoming available to average developers and users, and they demand a completely new way of thinking about user interfaces. Such is not the case with haptic interfaces, which are rapidly going mainstream and are available on a variety of devices, from mobile phones to tablet computers and ‘internet devices’. While these may appear to simply replace the mouse with a single-point touch screen, that’s not always the case. Several of these devices support multi-touch, letting users provide input with not one but several fingers, going beyond point-and-click to gestural input. They also offer some form of force feedback, though this is admittedly quite rudimentary at present.
This video from 10/GUI elegantly illustrates the concept of multi-touch and discusses some of the key issues associated with its use.
10/GUI from C. Miller on Vimeo.
This video was created to start a discussion on the design and use of possible multi-touch interfaces. At this time, unlike the mouse/pointer UI, there is no standard for multi-touch. As vendors rush to offer haptic interfaces on their devices, each creates its own standard; this has killed interoperability and led to the profusion of options in the market today. If multi-touch is to flourish and become a common UI paradigm, a standard must evolve.
Widespread adoption of multi-touch interfaces will prove to be a fundamental shift in human-computer interaction. Such a shift will have huge implications for electronic learning, which has always relied on the mouse as the primary interface method. Learning interaction designers will have to adopt a ‘multi-touch mindset’, which in turn demands a radical rethink of how we have approached learning interaction.
At this point, I feel the following elements would necessarily be part of this mindset (and I may be completely wrong about these):
- Natural gestures – The simplest gestures, those that function effectively across cultural/linguistic boundaries (are there any such? the pinch/squeeze to zoom the display in and out comes close), would probably form the basis for interaction. Recognized consistently across all multi-touch platforms, they would be the equivalent of the mouse’s point and click.
- Symbolic gestures – Gestures that function within a specific cultural/linguistic context, or that are arbitrarily defined by the UI developers. These could be tied to a variety of responses from the system.
- Sequential gestures – Combinations of natural and symbolic gestures that make the system respond in a specific manner. This is much like staged interaction in elearning, except that a series of point-and-clicks is replaced by gestures the system can recognize.
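To make the pinch example above concrete, here is a minimal sketch of how a system might classify a two-finger pinch from raw touch coordinates. The function names, the data shapes, and the threshold are purely illustrative assumptions of mine, not any platform’s actual multi-touch API:

```javascript
// Illustrative sketch only: classifying a pinch gesture from two touch
// points, each a plain {x, y} object. No real touch API is assumed.

// Straight-line distance between two touch points.
function touchDistance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Compare finger spread at the start and end of a move:
// spread grows => zoom in, spread shrinks => zoom out.
// `threshold` (in pixels, arbitrary here) filters out jitter.
function classifyPinch(startTouches, endTouches, threshold = 10) {
  const delta =
    touchDistance(endTouches[0], endTouches[1]) -
    touchDistance(startTouches[0], startTouches[1]);
  if (delta > threshold) return "zoom-in";
  if (delta < -threshold) return "zoom-out";
  return "none";
}
```

In a real interface, something like this would run continuously on touch-move events rather than comparing only two snapshots, and the zoom amount would be scaled by the change in spread; the point is simply that even the most “natural” gesture still has to be defined numerically somewhere, which is exactly where the lack of a standard bites.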
As I list these, I realize that this line of thinking still draws on the point-and-click metaphor for UIs, and that isn’t quite appropriate. As more multi-touch interfaces hit the market, elearning designers like me will have to embrace this new mode of human-computer interaction. I, for one, find myself quite unprepared. While this new paradigm of human-computer interaction will prove daunting to master, it’s going to be an exciting, wild ride till we do.