How can we develop appropriate and natural forms of communication between humans and computers? In the future, computers will hardly be perceived as such; instead, they will be seamlessly integrated into our living and working environments, for example in cyber-physical systems. How can people be effectively supported, given their experiences, sensory organs, gestural means of expression, and social needs?

Our work focuses on the systematic, fundamental study of diverse, preferably natural interaction modalities and their synergistic combination. We investigate gestural multi-touch interaction, digital pens in combination with digital paper, the efficient combination of touch and pen input, gestural interaction with hands, head, and feet, gaze-supported interaction with remote displays, and tangibles. Our focus lies on small and large interactive surfaces (mini-displays, smartphones, tablets, tabletops, and high-resolution wall-sized displays) and their effective combination in multi-display environments.

We apply the resulting techniques and principles in various application domains and in projects with academic and industrial partners from Germany and abroad. We also investigate how modern user interfaces and interaction techniques can be used effectively in interactive information visualization, music informatics, and the Semantic Web.

Recent Publications

  • The Concrete Evonne: Visualization Meets Concrete Domain Reasoning.

    Alrabbaa, C.; Baader, F.; Dachselt, R.; Kovtunova, A.; Méndez, J.;

    @inproceedings{ABDKM25,
       author = {Christian Alrabbaa and Franz Baader and Raimund Dachselt and Alisa Kovtunova and Juli\'{a}n M\'{e}ndez},
       title = {The Concrete Evonne: Visualization Meets Concrete Domain Reasoning}
    }

  • The Concrete Evonne: Visualization Meets Concrete Domain Reasoning (Extended Abstract).

    Alrabbaa, C.; Baader, F.; Dachselt, R.; Kovtunova, A.; Méndez, J.;

    @inproceedings{eaABDKM25,
       author = {Christian Alrabbaa and Franz Baader and Raimund Dachselt and Alisa Kovtunova and Juli\'{a}n M\'{e}ndez},
       title = {The Concrete Evonne: Visualization Meets Concrete Domain Reasoning (Extended Abstract)}
    }

  • The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels.

    Baader, J.; Ellenberg, M.; Satkowski, M.;

    @inproceedings{baader2025authoring,
       author = {Julian Baader and Mats Ole Ellenberg and Marc Satkowski},
       title = {The Invisible Hand of the Context: Authoring of Context-Aware Mixed Reality Labels},
       numpages = {6},
   doi = {10.1145/3743049.3748552},
       keywords = {Mixed Reality, Labeling, Label Authoring, Context Aware Labels, Mixed Reality Labels}
    }

  • Face Off: External Tracking vs. Manual Control for Facial Expressions in Multi-User Extended Reality.

    Krug, K.; Song, X.; Büschel, W.;

    @inproceedings{krug2025face,
       author = {Katja Krug and Xiaoli Song and Wolfgang B\"{u}schel},
       title = {Face Off: External Tracking vs. Manual Control for Facial Expressions in Multi-User Extended Reality},
       numpages = {5},
   doi = {10.1145/3743049.3748590},
       keywords = {Mixed Reality, Collaboration, Facial Expression, Avatars}
    }

  • Understanding Debugging as Episodes: A Case Study on Performance Bugs in Configurable Software Systems.

    Weber, M.; Mailach, A.; Apel, S.; Siegmund, J.; Dachselt, R.; Siegmund, N.;

    @inproceedings{Weber-2025-DebuggingAsEpisodes,
       author = {Max Weber and Alina Mailach and Sven Apel and Janet Siegmund and Raimund Dachselt and Norbert Siegmund},
       title = {Understanding Debugging as Episodes: A Case Study on Performance Bugs in Configurable Software Systems},
       numpages = {23},
       doi = {10.1145/3717523}
    }