VSA4VQA: Scaling A Vector Symbolic Architecture To Visual Question Answering on Natural Images
Anna Penzkofer, Lei Shi, Andreas Bulling
Proc. 46th Annual Meeting of the Cognitive Science Society (CogSci), 2024.
Oral Presentation
Abstract
While Vector Symbolic Architectures (VSAs) are promising for modelling spatial cognition, their application is currently limited to artificially generated images and simple spatial queries. We propose VSA4VQA – a novel 4D implementation of VSAs that constructs a mental representation of natural images for the challenging task of Visual Question Answering (VQA). VSA4VQA is the first model to scale a VSA to complex spatial queries. Our method is based on the Semantic Pointer Architecture (SPA) to encode objects in a hyper-dimensional vector space. To encode natural images, we extend the SPA to include dimensions for objects' width and height in addition to their spatial location. To perform spatial queries, we further introduce learned spatial query masks and integrate a pre-trained vision-language model for answering attribute-related questions. We evaluate our method on the GQA benchmark dataset and show that it can effectively encode natural images, achieving performance competitive with state-of-the-art deep learning methods for zero-shot VQA.
Links
Paper: penzkofer24_cogsci.pdf
Code: https://git.hcics.simtech.uni-stuttgart.de/public-projects/VSA4VQA
Paper Access: https://escholarship.org/uc/item/26j7v1nf
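The 4D SPA encoding described in the abstract, which binds each object to fractional powers of axis vectors for x, y, width, and height, can be sketched in NumPy. This is a minimal illustration under assumed parameters (vector dimensionality, coordinate values, and object names are placeholders); it follows the standard fractional-binding construction used in SPA work, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # hyper-dimensional vector size (assumed)

def make_unitary(v):
    # Normalise all Fourier magnitudes to 1 so binding preserves norms.
    f = np.fft.fft(v)
    return np.fft.ifft(f / np.abs(f)).real

def bind(a, b):
    # Circular convolution, the SPA binding operator.
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def power(v, e):
    # Fractional binding v^e: exponentiate in the Fourier domain.
    return np.fft.ifft(np.fft.fft(v) ** e).real

def inverse(v):
    # Approximate inverse for unbinding (exact for unitary vectors).
    return np.fft.ifft(np.conj(np.fft.fft(v))).real

# Random unitary axis vectors for x, y and the added width/height axes.
X, Y, W, H = (make_unitary(rng.standard_normal(D)) for _ in range(4))

def location(x, y, w, h):
    # 4D spatial semantic pointer: X^x (*) Y^y (*) W^w (*) H^h.
    return bind(bind(power(X, x), power(Y, y)),
                bind(power(W, w), power(H, h)))

def encode(obj, x, y, w, h):
    # Bind an object identity vector to its 4D location pointer.
    return bind(obj, location(x, y, w, h))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode a toy two-object scene into one superposed memory vector.
cat = make_unitary(rng.standard_normal(D))
dog = make_unitary(rng.standard_normal(D))
loc_cat = location(0.2, 0.3, 0.1, 0.1)
loc_dog = location(0.7, 0.8, 0.2, 0.3)
memory = encode(cat, 0.2, 0.3, 0.1, 0.1) + encode(dog, 0.7, 0.8, 0.2, 0.3)

# Unbinding "cat" from memory recovers a noisy copy of its 4D location,
# which is far more similar to loc_cat than to loc_dog.
query = bind(memory, inverse(cat))
```

Spatial queries of the kind the paper handles would then compare such unbound location pointers against query-specific regions (the paper learns spatial query masks for this step).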
BibTeX
@inproceedings{penzkofer24_cogsci,
author = {Penzkofer, Anna and Shi, Lei and Bulling, Andreas},
title = {{VSA4VQA}: {Scaling} {A} {Vector} {Symbolic} {Architecture} {To} {Visual} {Question} {Answering} on {Natural} {Images}},
booktitle = {Proc. 46th Annual Meeting of the Cognitive Science Society (CogSci)},
year = {2024},
volume = {46},
url = {https://escholarship.org/uc/item/26j7v1nf}
}