VD-GR: Boosting Visual Dialog with Cascaded Spatial-Temporal Multi-Modal GRaphs
Adnen Abdessaied,
Lei Shi,
Andreas Bulling
Proc. IEEE/CVF Winter Conference on Applications of Computer Vision (WACV),
pp. 5805–5814,
2024.
Abstract
We propose VD-GR – a novel visual dialog model that combines pre-trained language models (LMs) with graph neural networks (GNNs). Prior work has mainly focused on one class of models at the expense of the other, thus missing out on the opportunity to combine their respective benefits. At the core of VD-GR is a novel integration mechanism that alternates between spatial-temporal multi-modal GNNs and BERT layers, and that covers three distinct contributions: First, we use multi-modal GNNs to process the features of each modality (image, question, and dialog history) and exploit their local structures before performing BERT global attention. Second, we propose hub-nodes that link to all other nodes within one modality graph, allowing the model to propagate information from one GNN (modality) to the other in a cascaded manner. Third, we augment the BERT hidden states with fine-grained multi-modal GNN features before passing them to the next VD-GR layer. Evaluations on VisDial v1.0, VisDial v0.9, VisDialConv, and VisPro show that VD-GR achieves new state-of-the-art results across all four datasets.
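The abstract describes the layer structure only at a high level. The following PyTorch-style sketch illustrates how one such layer might look: per-modality GNNs with an appended hub node that cascades information from one modality graph to the next, a stand-in Transformer encoder layer for BERT global attention, and a fusion step that augments the hidden states with the GNN features. All class names, dimensions, the cascade order, and the message-passing and fusion rules are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SimpleGraphConv(nn.Module):
    """One round of mean-aggregation message passing over a dense adjacency matrix."""

    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (num_nodes, dim); adj: (num_nodes, num_nodes), non-negative edge weights
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin(adj @ x / deg))


class VDGRLayerSketch(nn.Module):
    """One layer alternating per-modality GNNs (with hub nodes) and global attention."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.gnns = nn.ModuleDict(
            {m: SimpleGraphConv(dim) for m in ("image", "question", "history")}
        )
        # Stand-in for a BERT layer: a single Transformer encoder layer.
        self.bert_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, feats, adjs):
        # feats[m]: (N_m, dim) node features; adjs[m]: (N_m, N_m) adjacency per modality.
        order = ("image", "question", "history")  # assumed cascade order
        gnn_out, hub = {}, None
        for m in order:
            x, adj, n = feats[m], adjs[m], feats[m].size(0)
            # Append a hub node connected to every node of this modality graph;
            # initialize it from the previous modality's hub to cascade information.
            hub_init = hub if hub is not None else x.mean(dim=0, keepdim=True)
            x = torch.cat([x, hub_init], dim=0)
            adj = nn.functional.pad(adj, (0, 1, 0, 1))
            adj[n, :] = 1.0
            adj[:, n] = 1.0
            out = self.gnns[m](x, adj)
            gnn_out[m] = out[:n]        # keep per-node features, drop the hub
            hub = out[n:n + 1]          # carried over to the next modality's graph
        # Global attention over the concatenated modality tokens (batch of 1 here).
        tokens = torch.cat([gnn_out[m] for m in order], dim=0).unsqueeze(0)
        hidden = self.bert_layer(tokens).squeeze(0)
        # Augment the hidden states with the fine-grained GNN features.
        gnn_feats = torch.cat([gnn_out[m] for m in order], dim=0)
        return self.fuse(torch.cat([hidden, gnn_feats], dim=-1))


if __name__ == "__main__":
    dim = 256
    feats = {m: torch.randn(n, dim) for m, n in (("image", 36), ("question", 12), ("history", 40))}
    adjs = {m: (torch.rand(f.size(0), f.size(0)) > 0.7).float() for m, f in feats.items()}
    layer = VDGRLayerSketch(dim)
    print(layer(feats, adjs).shape)  # torch.Size([88, 256])
```

In this reading, the hub node is simply an extra node wired to all others in its modality graph, so propagating its embedding forward is how one GNN hands information to the next; the real model may cascade and fuse differently.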
BibTeX
@inproceedings{abdessaied24_wacv,
author = {Abdessaied, Adnen and Shi, Lei and Bulling, Andreas},
title = {VD-GR: Boosting Visual Dialog with Cascaded Spatial-Temporal Multi-Modal GRaphs},
booktitle = {Proc. IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
year = {2024},
pages = {5805--5814}
}