
Question Type Classification From Scanpath on Information Visualisations


Description: The goal of this project is to develop a model on the SalChartQA dataset that classifies the type of question a user is answering during chart question answering (CQA) tasks. The broader objective of this research is to infer human intention from gaze behaviour and to assess how effectively chart structures support comprehension. The primary motivation is to enable early assessment of comprehension in tasks involving visual information such as charts: classifying question types from scanpaths provides a quantitative way to evaluate whether a chart's design effectively facilitates answering questions about it. By analysing how users visually interact with charts, we aim to improve the design and utility of information visualisations so that they better support user comprehension and response accuracy.

Task:
  • Survey related work.
  • Implement a simple model (a minimal baseline sketch follows below).
  • Conduct a thorough analysis of the results.
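To make the modelling task concrete, here is a minimal baseline sketch: it aggregates each variable-length scanpath into a fixed-length feature vector and trains a random-forest classifier to predict the question type. The fixation format (x, y, duration triples), the number of question types, and the synthetic data are illustrative assumptions; the actual SalChartQA fields and loading code will differ.

# Baseline sketch (assumption: each trial yields a fixation sequence of
# (x, y, duration) triples plus a question-type label; adapt the loader
# to the real SalChartQA format).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def scanpath_features(fixations):
    """Aggregate a variable-length scanpath into a fixed feature vector."""
    fix = np.asarray(fixations, dtype=float)   # shape: (n_fixations, 3)
    xy, dur = fix[:, :2], fix[:, 2]
    # Saccade amplitudes = distances between consecutive fixations.
    sacc = (np.linalg.norm(np.diff(xy, axis=0), axis=1)
            if len(fix) > 1 else np.zeros(1))
    return np.array([
        len(fix),               # number of fixations
        dur.mean(),             # mean fixation duration
        dur.sum(),              # total dwell time
        xy.std(axis=0).mean(),  # spatial dispersion of fixations
        sacc.mean(),            # mean saccade amplitude
    ])

# Synthetic placeholder data; replace with real scanpaths and labels.
rng = np.random.default_rng(0)
scanpaths = [rng.random((rng.integers(5, 30), 3)) for _ in range(200)]
labels = rng.integers(0, 4, size=200)  # e.g. 4 question types (assumption)

X = np.stack([scanpath_features(s) for s in scanpaths])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())  # chance-level on random data

A stronger model would replace the hand-crafted features with a sequence encoder (e.g. an RNN or transformer over fixations), but a simple aggregate-feature baseline like this is a useful first reference point for the analysis.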
Target Conferences: ETRA, CHI, UIST, ACM MM

Supervisor: Takumi Nishiyasu and Yao Wang

Distribution: 10% Literature, 70% Implementation, 20% Analysis and Evaluation

Requirements: Experience with Python, ideally some knowledge of statistics

Literature: [1] Susanne Hindennach, Lei Shi, and Andreas Bulling. 2024. Explaining Disagreement in Visual Question Answering Using Eye Tracking. In 2024 Symposium on Eye Tracking Research and Applications (ETRA ’24), June 4–7, 2024, Glasgow, United Kingdom. ACM, New York, NY, USA, 7 pages.

[2] Yao Wang, Yue Jiang, Zhiming Hu, Constantin Ruhdorfer, Mihai Bâce, and Andreas Bulling. 2024. VisRecall++: Analysing and Predicting Visualisation Recallability from Gaze Behaviour. In Proceedings of the ACM on Human-Computer Interaction (PACM HCI).

[3] Yao Wang, Weitian Wang, Abdullah Abdelhafez, Mayar Elfares, Zhiming Hu, Mihai Bâce, and Andreas Bulling. 2024. SalChartQA: Question-driven Saliency on Information Visualisations. In Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 1–14.
[3] Yao Wang, Weitian Wang, Abdullah Abdelhafez, Mayar Elfares, Zhiming Hu, Mihai Bâce, Andreas Bulling. 2024. SalChartQA: Question-driven Saliency on Information Visualisations. In Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 1–14, 2024. Paper link.