Last month I had the pleasure of attending the ETRA 2016 symposium in Charleston, USA. This conference covers all aspects of eye movement research across a wide range of disciplines: computer scientists, engineers, and behavioral scientists come together to advance their common vision of moving eye tracking research and its applications forward and expanding their impact. This year's program featured many good and interesting talks; here I want to highlight two papers I found particularly interesting:
The keynote by Ben Tatler (University of Aberdeen, Scotland), titled Everyday vision: sampling and encoding information in natural settings, addressed how human gaze control is linked to our behavior while we solve a task. Tatler and his research team have performed numerous experiments to gain insight into the relation between gaze control for information gathering and task-solving behavior. Eye tracking reveals, for example, that gaze typically precedes interactions with objects or their surrounding area. The visual system should therefore not be seen as an isolated component but rather as part of a broader network of vision, planning, and action.
Preethi Vaidyanathan (Rochester Institute of Technology, New York, USA) presented an automatic method for assigning labels to fixations. The labels are derived through a textual analysis of transcribed verbal descriptions recorded during an experiment; in this case, dermatologists explained their diagnostic procedure while examining images of various skin diseases. When such transcriptions are available and can be synchronized with the recorded eye tracking data, the method yields more accurate and meaningful labels than conventional AOI-based approaches. The work is described in her paper Fusing Eye Movements and Observer Narratives for Expert-Driven Image-Region Annotations, which received the Best Paper Award from the ETRA reviewing committee.
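To give a rough idea of the kind of fusion involved, here is a minimal sketch of the general principle: labeling timestamped fixations with the words spoken around the same moment. This is only an illustration under simplifying assumptions I am making myself (word-level timestamps from a force-aligned transcript, a fixed alignment window); the data structures and the ALIGNMENT_WINDOW parameter are hypothetical, and the actual fusion method in the paper is considerably more sophisticated than this toy alignment.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float      # gaze position in image coordinates
    y: float
    start: float  # fixation onset time in seconds
    end: float    # fixation offset time in seconds

@dataclass
class Word:
    text: str
    time: float   # word onset time in the synchronized transcript

# Hypothetical window (seconds): spoken words tend to lag the gaze they
# describe, so we also accept words uttered shortly after a fixation ends.
ALIGNMENT_WINDOW = 0.5

def label_fixations(fixations, words):
    """Attach to each fixation the words spoken during it or within
    ALIGNMENT_WINDOW seconds after it ends."""
    labeled = []
    for fix in fixations:
        labels = [w.text for w in words
                  if fix.start <= w.time <= fix.end + ALIGNMENT_WINDOW]
        labeled.append((fix, labels))
    return labeled

if __name__ == "__main__":
    fixations = [Fixation(x=120, y=80, start=0.2, end=0.6),
                 Fixation(x=300, y=150, start=0.9, end=1.4)]
    words = [Word("irregular", 0.5), Word("border", 0.8), Word("lesion", 1.6)]
    for fix, labels in label_fixations(fixations, words):
        print(f"fixation at ({fix.x}, {fix.y}): {labels}")
```

In practice one would additionally filter the transcript for domain-relevant terms and resolve each label to an image region, which is where the expert-driven fusion of the paper goes well beyond a purely temporal match like this one.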