Summary

Smartphones, tablets, and in-vehicle interfaces are highly complex, and users interact with them in varied, dynamic environments. Our goal is to improve interaction with these devices: to help users find the information they need and complete tasks as efficiently as possible. Understanding how users visually sample an interface is key to achieving this goal. Unfortunately, traditional user studies can be resource-intensive and time-consuming, making them impractical for evaluating the wide range of features and contexts found in modern devices.

In this talk, I'll discuss how computational models that simulate human behavior can provide a compelling alternative. I'll present our research on data-driven models trained to evaluate in-vehicle information systems, and explore how reinforcement learning and computational rationality offer a new way to model how humans allocate visual attention.