Traditional input devices limit mobility, accessibility, and interactivity. These devices are also unintuitive, require experience to use, and limit the potential user base [1]. In fact, for decades, human-computer interaction has remained largely a keyboard-and-mouse affair, even as the underlying hardware has become orders of magnitude faster. While touch-screen devices have improved some aspects of usability, they still suffer from several of the above limitations and are found almost solely in high-end mobile devices. A fundamental shift in this paradigm is necessary to improve the usability and pervasiveness of computer systems. An example of such a shift is the Nintendo Wii, a video game console released in 2006 that fundamentally changed how users interact with game consoles [2]. The driving reason behind the Wii's popularity is its novel use of accelerometers and infrared tracking to allow more natural and realistic interaction. In particular, this interaction has expanded the user base to include non-traditional casual and older users, who found these interfaces intuitive and easy to use. Inspired by this approach, we plan to develop a system to evaluate and develop natural, intuitive computer input systems.
We propose I-GEST, a platform for research and development of gesture-based, intuitive computer input systems built on infrared sensing devices. These systems will be coupled with a set of applications used to evaluate usability and improve the design. The overall system will include infrared transmitters, infrared sensors, gesture recognition, supporting software, and a set of applications. The platform will be designed to be easily extensible for future research. Further, the platform will be used to assess usability for image manipulation, data-entry, and basic communication applications.
In this work, we will exploit inexpensive wireless remotes originally intended for the Wii, alongside other low-cost hardware, to construct an easy-to-use sensing system, as has been accomplished by [3]. This IR-based sensing system is shown in Figure 1. Light from a bank of IR LEDs reflects off reflective tape worn on the user's fingers and is then tracked by the IR camera in the Wii remote. Using this input data, the computer outputs relevant information on the display, so that the user's finger movements are immediately reflected in the display's output. The gesture-recognition toolset is an integral part of I-GEST and will provide an interface between the IR-sensing system and target applications. It will integrate existing OCR software with the multi-input software specifically needed for gesture recognition. For instance, an extremely important pair of gestures is grab and release, which allows users to move freely over the input space and interact with particular parts of it at will. This cannot be handled by traditional OCR technology alone, since it involves multiple simultaneous inputs, in this case input captured from the user's thumb and forefinger. A successful integration of the two will allow easy application development using this toolset.
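As a concrete illustration of the grab-and-release pair, one simple approach is to track the distance between the two IR points (thumb and forefinger) and fire a "grab" event when they pinch together and a "release" event when they separate. The sketch below is illustrative only, not the I-GEST implementation; the class and threshold names are our own assumptions, and we assume the camera reports normalized (x, y) coordinates for each tracked point. The two thresholds differ deliberately (hysteresis) so the state does not flicker when the fingers hover near the boundary.

```python
import math

# Illustrative thresholds (assumed normalized [0, 1] coordinate space).
GRAB_THRESHOLD = 0.05     # fingers closer than this -> grab
RELEASE_THRESHOLD = 0.08  # fingers farther than this -> release

def distance(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

class PinchDetector:
    """Tracks whether the user is currently 'grabbing' (hypothetical sketch)."""

    def __init__(self):
        self.grabbing = False

    def update(self, thumb, forefinger):
        """Feed one frame of tracked points; return 'grab', 'release', or None."""
        d = distance(thumb, forefinger)
        if not self.grabbing and d < GRAB_THRESHOLD:
            self.grabbing = True
            return "grab"
        if self.grabbing and d > RELEASE_THRESHOLD:
            self.grabbing = False
            return "release"
        return None
```

An application would call `update` once per camera frame and begin or end a drag operation when it returns `"grab"` or `"release"`.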
Using this toolset, we will create one or more sample applications to demonstrate the potential of this technology, and a suite of applications will be interfaced with the system in order to measure usability. The first application will be an image manipulation tool supporting scaling, rotating, and morphing of images. It will use multi-point input to control image manipulation, allowing users to scale and rotate images with finger movements, which is more intuitive than traditional methods. Time permitting, we also wish to implement applications that support basic number recognition for data entry, as well as symbol recognition for simple communication. These applications not only serve as examples for future developers and researchers but, more importantly, illustrate the broader possibilities of computer interaction to those who see computer science as a more limited field.
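The two-finger scale-and-rotate interaction described above reduces to simple geometry: the scale factor is the ratio of the current to the initial finger separation, and the rotation is the change in the angle of the line joining the two fingertips. The following sketch shows one way this could be computed; the function name is illustrative and not part of any existing I-GEST API.

```python
import math

def transform_from_fingers(start_a, start_b, cur_a, cur_b):
    """Return (scale, rotation_radians) implied by the motion of two fingertips.

    start_a/start_b are the (x, y) positions of the two fingers when the
    gesture began; cur_a/cur_b are their current positions.
    """
    def vec(p, q):
        return (q[0] - p[0], q[1] - p[1])

    v0 = vec(start_a, start_b)   # initial finger-to-finger vector
    v1 = vec(cur_a, cur_b)       # current finger-to-finger vector

    len0 = math.hypot(*v0)
    len1 = math.hypot(*v1)
    scale = len1 / len0 if len0 > 0 else 1.0

    # Rotation = change in the angle of the finger-to-finger vector.
    rotation = math.atan2(v1[1], v1[0]) - math.atan2(v0[1], v0[0])
    return scale, rotation
```

For example, if the fingers start one unit apart and end two units apart while the line between them turns a quarter turn, the image would be scaled by 2 and rotated by 90 degrees.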
Overall, I-GEST will make software accessible to many audiences who might otherwise be unwilling or unable to use traditional input methods. Our project extends these concepts to create an easy-to-use platform that offers unique accessibility to a variety of potential applications, as well as enabling future research on the relative effectiveness of these different methods of interaction.