
Deep Gaze Pooling: Inferring and Visually Decoding Search Intents From Human Gaze Fixations

Hosnieh Sattar, Mario Fritz, Andreas Bulling

Neurocomputing, 387, pp. 369–382, 2020.



We propose a Gaze Pooling Layer that leverages gaze data as an attention mechanism in a trained CNN architecture. Our methods use this layer to predict the target of visual search, in terms of categories and attributes, from users' gaze, and to decode gaze data into a visualization of the search target.

Abstract

Predicting the target of visual search from human eye fixations (gaze) is a difficult problem with many applications, e.g. in human-computer interaction. While previous work has focused on predicting specific search target instances, we propose the first approach to predict categories and attributes of search intents from gaze data and to visually reconstruct plausible targets. However, state-of-the-art models for categorical recognition generally require large amounts of training data, which is prohibitive for gaze data. To address this challenge, we further propose a novel Gaze Pooling Layer that combines gaze information with visual representations from deep learning approaches. Our scheme incorporates both spatial and temporal aspects of human gaze behavior as well as the appearance of the fixated locations. We propose an experimental setup and a novel dataset and demonstrate the effectiveness of our method for gaze-based search target prediction and reconstruction. We highlight several practical advantages of our approach, such as compatibility with existing architectures, no need for gaze training data, and robustness to common sources of gaze noise.
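
To make the mechanism concrete, the following is a minimal sketch of a gaze-pooling step, assuming a pretrained CNN backbone producing (C, H, W) feature maps and fixations given as normalized (x, y, duration) tuples. The function names, the Gaussian fixation-density construction, and the sigma parameter are illustrative assumptions for this sketch, not the authors' released code.

# Minimal sketch of a gaze-pooling step (illustrative; not the authors' code).
import torch

def fixation_density(fixations, height, width, sigma=0.05):
    """Duration-weighted Gaussian fixation density map of shape (H, W)."""
    ys = torch.linspace(0.0, 1.0, height).view(-1, 1)   # (H, 1)
    xs = torch.linspace(0.0, 1.0, width).view(1, -1)    # (1, W)
    density = torch.zeros(height, width)
    for x, y, duration in fixations:
        density += duration * torch.exp(
            -((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return density / density.sum().clamp(min=1e-8)      # normalize to sum to 1

def gaze_pool(feature_maps, fixations):
    """Weight CNN feature maps (C, H, W) by fixation density, pool spatially.

    Returns a (C,) gaze-conditioned descriptor that an existing classifier
    head can consume to predict search-target categories and attributes.
    """
    _, h, w = feature_maps.shape
    weights = fixation_density(fixations, h, w)          # (H, W), sums to 1
    return (feature_maps * weights.unsqueeze(0)).sum(dim=(1, 2))

# Usage with made-up data:
feats = torch.randn(512, 14, 14)                  # e.g. conv5 feature maps
fixs = [(0.3, 0.4, 120.0), (0.6, 0.5, 300.0)]     # normalized (x, y, dur ms)
descriptor = gaze_pool(feats, fixs)               # -> torch.Size([512])

Because the pooling is applied on top of an already trained feature extractor, no gaze data is needed to train the CNN itself, consistent with the compatibility and no-gaze-training-data advantages stated above.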

BibTeX

@article{sattar20_neurocomp,
  title   = {Deep Gaze Pooling: Inferring and Visually Decoding Search Intents From Human Gaze Fixations},
  author  = {Sattar, Hosnieh and Fritz, Mario and Bulling, Andreas},
  journal = {Neurocomputing},
  year    = {2020},
  volume  = {387},
  pages   = {369--382},
  doi     = {10.1016/j.neucom.2020.01.028}
}