by Andreas Siegel
In this thesis, we present and discuss two approaches to gaze-supported multimodal interaction in the context of zoomable information spaces. As a concrete application context, we choose geographic information systems (GIS), with Google Earth as a representative example.
First, we investigate how conventional desktop interaction can be enhanced by a novel combination of gaze and foot input. This offers the potential for fluently performing manual tasks (e.g., object selection) and navigation tasks (e.g., pan and zoom) in zoomable information spaces in quick succession or even in parallel. For this, we take advantage of fast gaze input to implicitly indicate where to navigate and of additional explicit foot input for speed control, while leaving the hands free for further manual input. This allows gaze input to be used in a subtle and unobtrusive way. We carefully investigate three variants of gaze-supported foot controls for pan and zoom, incorporating one-, two-, and multidirectional foot pedals. These were evaluated and compared to mouse-only input in a preliminary user study with 12 participants using a geographic information system. The results suggest that gaze-supported foot input is feasible for convenient and user-friendly navigation in zoomable information spaces, achieving results comparable to mouse input. However, further fine-tuning is still required for more efficient use.
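To illustrate the core idea, the following minimal sketch shows how a signed foot-pedal deflection could drive zoom speed while the current gaze position anchors the zoom target. All names (Viewport, zoom_toward_gaze, the pedal and rate parameters) are hypothetical illustrations, not the implementation used in this thesis.

```python
# Minimal sketch of gaze-directed zooming with foot-pedal speed control.
# Gaze implicitly supplies *where* to zoom; the pedal supplies *how fast*.

from dataclasses import dataclass

@dataclass
class Viewport:
    cx: float      # viewport center x in world coordinates
    cy: float      # viewport center y in world coordinates
    scale: float   # world units per screen pixel

def zoom_toward_gaze(vp: Viewport, gaze_px: tuple[float, float],
                     pedal: float, dt: float,
                     zoom_rate: float = 1.5) -> Viewport:
    """Zoom the viewport toward the current gaze point.

    gaze_px: gaze position in pixels, relative to the viewport center.
    pedal:   signed pedal deflection in [-1, 1]; positive zooms in,
             negative zooms out, and the magnitude controls the speed.
    """
    factor = zoom_rate ** (-pedal * dt)     # exponential zoom feels uniform
    gx = vp.cx + gaze_px[0] * vp.scale      # gazed-at point in world coords
    gy = vp.cy + gaze_px[1] * vp.scale
    new_scale = vp.scale * factor
    # Recenter so the gazed-at world point stays under the same pixel:
    new_cx = gx - gaze_px[0] * new_scale
    new_cy = gy - gaze_px[1] * new_scale
    return Viewport(new_cx, new_cy, new_scale)
```

Anchoring the zoom at the gaze point (rather than the screen center) is what lets the gaze remain implicit: the user simply looks at the region of interest and presses the pedal, without an explicit pointing step.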
In the second part of this thesis, we leave the desktop setting and investigate gaze-supported multimodal navigation on a large interactive display wall (powerwall). The envisioned interaction technique combines gaze input with tracking of the user's position and motion to enable distant interaction that takes the user's point of regard into account. We also consider multi-touch input, via both a hand-held device and the interactive display wall itself, as an additional modality and input channel for a rich set of interaction possibilities. The application from the first part forms the basis for further investigations in this new interactive environment. Distant eye tracking on the display wall works quite well in principle, but our work shows that further adjustments are needed to fully meet the challenges that arise when users are free to move around in an interactive environment. On this basis, we discuss limitations and challenges of the envisioned interaction techniques and outline possibilities for future work on this topic.
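As an illustration of the underlying geometry, the sketch below intersects a gaze ray, given by a tracked head position and a gaze direction, with the wall plane to obtain the point of regard. The coordinate convention (wall at z = 0, user at z > 0) and all names are our assumptions for illustration, not the setup described in the thesis.

```python
# Hypothetical sketch: point of regard on a display wall from a tracked
# head position and gaze direction, with the wall as the plane z = 0.

import numpy as np

def point_of_regard(head_pos: np.ndarray, gaze_dir: np.ndarray):
    """Intersect the gaze ray with the wall plane z = 0.

    head_pos: (x, y, z) of the tracked head in room coordinates, z > 0.
    gaze_dir: unit gaze direction vector in the same coordinate frame.
    Returns the (x, y) point on the wall, or None if the user looks away.
    """
    dz = gaze_dir[2]
    if dz >= 0:                    # gaze parallel to or away from the wall
        return None
    t = -head_pos[2] / dz          # ray parameter at which z reaches 0
    hit = head_pos + t * gaze_dir
    return hit[:2]
```

Because the ray origin moves with the user, any error in the tracked head position or gaze direction scales with the distance to the wall, which hints at why free user movement makes distant eye tracking on a powerwall harder than at a desktop.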
In conclusion, we reflect on the general appropriateness of the proposed concepts and interaction techniques, both for the intended application context and for zoomable information spaces in general.