Datasets
SalChartQA
ACM CHI 2024
Dataset contents: 6,000 question-driven attention maps on 3,000 visualisations, with 74,340 answers from crowd workers
Number of participants: 165
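To illustrate how question-driven attention maps like these are typically used, here is a minimal sketch that compares a model's predicted saliency map against a ground-truth attention map using KL divergence, a standard saliency metric. The file names and greyscale-PNG format are assumptions for illustration, not the dataset's documented layout.

```python
import numpy as np
from PIL import Image

def load_map(path):
    """Load a greyscale attention map and normalise it into a distribution."""
    m = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    m += 1e-8                      # avoid zero bins in the log below
    return m / m.sum()

def kl_divergence(pred, gt):
    """KL(gt || pred); lower means the prediction matches better."""
    return float(np.sum(gt * np.log(gt / pred)))

gt = load_map("attention_map_0001.png")       # hypothetical file name
pred = load_map("model_prediction_0001.png")  # hypothetical file name
print(f"KL divergence: {kl_divergence(pred, gt):.4f}")
```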
MSCOCO-EMMA and FigureQA-EMMA
Proc. of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023)
Dataset contents: Large-scale, cognitively plausible synthetic gaze data for the images in the full MSCOCO and FigureQA datasets
Number of participants: N/A
VisRecall
TVCG (IEEE VIS 2023)
Dataset contents: 200 information visualisations annotated with crowd-sourced human recallability scores obtained from 1,000 questions across five question types
Number of participants: 305
ConAn
ACM ICMI 2021
Dataset contents: Four showcase videos taken with a 360° camera (Insta360 One X), depicting different interactions
Number of participants: N/A
VQA-MHUG
ACL CoNLL 2021
Dataset contents: Multimodal human gaze data over textual questions and their corresponding images
Number of participants: 49
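Gaze recordings like these are commonly converted into attention heatmaps before being compared with neural attention. A minimal sketch, assuming fixations are available as (x, y) pixel coordinates; the actual VQA-MHUG file format may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixations_to_heatmap(fixations, height, width, sigma=25.0):
    """Accumulate (x, y) fixation points and blur them into a heatmap."""
    heat = np.zeros((height, width), dtype=np.float64)
    for x, y in fixations:
        if 0 <= int(y) < height and 0 <= int(x) < width:
            heat[int(y), int(x)] += 1.0
    heat = gaussian_filter(heat, sigma=sigma)  # sigma is an illustrative choice
    return heat / heat.max() if heat.max() > 0 else heat
```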
MovieQA-Reading Comprehension (MQA-RC)
ACL CoNLL 2020
Dataset contents: Question-answer pairs; eye-tracking extension to the MovieQA dataset
Number of participants: 23
Everyday Mobile Visual Attention (EMVA)
ACM CHI 2020
Dataset contents: Video snippets, usage logs, interaction events, sensor data
Number of participants: 32
DEyeAdicContact
ACM ETRA 2020
Dataset contents: Fine-grained eye contact annotations for 74 hours of YouTube videos (videos not included)
Number of participants: N/A
MPIIDPEye: Privacy-Aware Eye Tracking Using Differential Privacy
ACM ETRA 2019
Dataset contents: Eye tracking data, eye movement features, ground truth annotation
Number of participants: 20
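The core mechanism behind differential privacy is easy to sketch: noise calibrated to the query's sensitivity is added before a feature is released. Below is a minimal example using the Laplace mechanism on an aggregated eye movement feature; the sensitivity and epsilon values are illustrative assumptions, not those used for MPIIDPEye.

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_mechanism(value, sensitivity, epsilon):
    """Release value + Laplace noise with scale sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

mean_fixation_ms = 243.0            # hypothetical aggregated feature
private = laplace_mechanism(mean_fixation_ms, sensitivity=50.0, epsilon=1.0)
print(f"Privatised mean fixation duration: {private:.1f} ms")
```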
PrivacEye: Privacy-Preserving Head-Mounted Eye Tracking Using Egocentric Scene Image and Eye Movement Features
ACM ETRA 2019
Dataset contents: First-person video dataset with data annotations, features and ground truth, video frames and ground truth, and statistics on private segments
Number of participants: 17
MPIIMobileAttention: Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors
ACM MobileHCI 2018
Dataset contents: Everyday mobile phone interactions
Number of participants: 20
MPIIEgoFixation: Fixation Detection for Head-Mounted Eye Tracking Based on Visual Similarity of Gaze Targets
ACM ETRA 2018
Dataset contents: Data files, ground truth files (fixation IDs, start and end frames of the corresponding scenes)
Number of participants: 5 (2,300+ fixations in total)
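For context, the classic baseline for this task is dispersion-based fixation detection (I-DT). The paper itself detects fixations via visual similarity of gaze targets in the scene video; the generic sketch below only illustrates the task, and the thresholds and gaze array layout are assumptions.

```python
import numpy as np

def _dispersion(w):
    """Horizontal plus vertical extent of a gaze window of shape (N, 2)."""
    return float(np.ptp(w[:, 0]) + np.ptp(w[:, 1]))

def idt_fixations(gaze, max_dispersion=0.05, min_samples=6):
    """Return (start, end) sample index pairs of fixations in an (N, 2) array."""
    fixations, start = [], 0
    while start + min_samples <= len(gaze):
        end = start + min_samples
        if _dispersion(gaze[start:end]) <= max_dispersion:
            # grow the window until dispersion exceeds the threshold
            while end < len(gaze) and _dispersion(gaze[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations
```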
InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation
ACM IMWUT 2017
Dataset contents: 280,000 close-up eye images
Number of participants: 17
MPIIFaceGaze: It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation
IEEE TPAMI 2019, IEEE CVPRW 2017
Dataset contents: MPIIGaze dataset augmented with manually annotated facial landmarks, pupil centres, and face regions
Number of participants: N/A
Labeled Pupils in the Wild (LPW): A Dataset for Studying Pupil Detection in Unconstrained Environments
ACM ETRA 2016
Dataset contents: Eye region videos (95 fps, recorded with a head-mounted eye tracker)
Number of participants: 22
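As a point of reference for what makes this data challenging, here is a deliberately naive pupil detection baseline: threshold the dark pupil region and fit an ellipse to the largest contour. The detectors evaluated on LPW are far more robust; the threshold value below is an assumption.

```python
import cv2

def detect_pupil(frame_gray, thresh=40):
    """Return a fitted pupil ellipse ((cx, cy), (w, h), angle) or None."""
    _, binary = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:           # fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(largest)
```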
3DGazeSim: 3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers
ACM ETRA 2016
Dataset contents: 7+ hours of eye tracking data; 10 recordings per participant (2 per depth, at 5 different depths)
Number of participants: 14
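The 2D-to-2D gaze mapping that head-mounted trackers traditionally use can be sketched as a polynomial regression fitted on calibration points; work like this studies how such mappings behave across depths. The quadratic feature set below is one common choice, not necessarily the paper's exact formulation, and the calibration arrays are placeholders.

```python
import numpy as np

def poly_features(p):
    """Quadratic expansion of 2D pupil positions p with shape (N, 2)."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_mapping(pupil_xy, target_xy):
    """Least-squares fit from pupil positions to calibration targets."""
    A = poly_features(pupil_xy)
    coeffs, *_ = np.linalg.lstsq(A, target_xy, rcond=None)
    return coeffs

def predict(coeffs, pupil_xy):
    return poly_features(pupil_xy) @ coeffs
```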
MPIIEmo
ACM ACII 2015
Dataset contents: 224 sequences, 8 viewpoints per sequence, 1,792 video files
Number of participants: 16
Discovery of Everyday Human Activities From Long-term Visual Behaviour Using Topic Models
ACM UbiComp 2015
Dataset contents: 80+ hours of eye tracking data, ground truth annotations
Number of participants: 10
MPIIGaze: Appearance-Based Gaze Estimation in the Wild
IEEE TPAMI 2019, IEEE CVPR 2015
Dataset contents: 213,659 images
Number of participants: 15
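Results on MPIIGaze are usually reported as the mean angular error between predicted and ground-truth 3D gaze directions. A minimal sketch of that metric; the example vectors are placeholders.

```python
import numpy as np

def angular_error_deg(pred, gt):
    """Angle in degrees between two 3D gaze direction vectors."""
    pred = pred / np.linalg.norm(pred)
    gt = gt / np.linalg.norm(gt)
    cos = np.clip(np.dot(pred, gt), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)))

print(angular_error_deg(np.array([0.0, -0.1, -1.0]),
                        np.array([0.05, -0.12, -1.0])))
```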
Prediction of Search Targets From Fixations in Open-World Settings
IEEE CVPR 2015
Dataset contents: Fixation data
Number of participants: 18
Recognition of Visual Memory Recall Processes Using Eye Movement Analysis
ACM UbiComp 2011
Dataset contents: 7 hours of electrooculography (EOG) data, ground truth annotations
Number of participants: 7
Eye Movement Analysis for Activity Recognition Using Electrooculography
IEEE TPAMI 2011, ACM UbiComp 2009
Dataset contents: 8 hours of electrooculography (EOG) data, 2 experimental runs per participant, full ground truth annotations
Number of participants: 8
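Activity recognition from EOG builds on low-level event detection such as saccade detection. A minimal sketch using a simple velocity threshold on one EOG channel; the sampling rate and threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def detect_saccades(eog, fs=128.0, vel_thresh=500.0):
    """Return sample indices where the EOG velocity exceeds a threshold."""
    velocity = np.abs(np.gradient(eog) * fs)   # amplitude units per second
    return np.flatnonzero(velocity > vel_thresh)
```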
Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography
ACM TAP 2012, IEEE Pervasive 2008
Dataset contents: 6 hours of electrooculography (EOG) data, 4 experimental runs per participant, full ground truth annotations
Number of participants: 8