Current architectures for multi-modal tasks such as visual question answering suffer from high complexity. As a result, they are difficult to train and require substantial computational resources. To address these problems, we present a CLIP-based architecture that does not require any fine-tuning of the feature extractors. A simple linear classifier is applied to the concatenated features of the image and text encoders. During training, an auxiliary loss that operates on the answer types is added. The resulting classification is then used as an attention gate on the answer-class selection. On the VizWiz 2022 Visual Question Answering Challenge we achieve 60.15% accuracy on Task 1: Predict Answer to a Visual Question and an AP score of 83.78% on Task 2: Predict Answerability of a Visual Question.
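The following is a minimal PyTorch sketch of the architecture as described above: frozen CLIP image and text features are concatenated, a linear classifier predicts the answer class, an auxiliary head predicts the answer type, and the answer-type prediction gates the answer logits. The class name, feature dimension, number of answer classes and answer types, and the `answer_type_mask` mapping are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class CLIPLinearVQA(nn.Module):
    """Hypothetical sketch: linear heads on top of frozen CLIP features,
    with an auxiliary answer-type head used as an attention gate."""

    def __init__(self, clip_dim=512, num_answers=3000, num_answer_types=4,
                 answer_type_mask=None):
        super().__init__()
        # Main classifier over the concatenated image + text CLIP features.
        self.answer_head = nn.Linear(2 * clip_dim, num_answers)
        # Auxiliary head predicting the answer type
        # (e.g. yes/no, number, other, unanswerable).
        self.type_head = nn.Linear(2 * clip_dim, num_answer_types)
        # Binary mask (num_answer_types x num_answers) mapping each answer
        # class to its answer type; assumed to be precomputed from the
        # training answers.
        if answer_type_mask is None:
            answer_type_mask = torch.ones(num_answer_types, num_answers)
        self.register_buffer("answer_type_mask", answer_type_mask)

    def forward(self, image_feat, text_feat):
        # image_feat, text_feat: frozen CLIP embeddings, shape (batch, clip_dim).
        fused = torch.cat([image_feat, text_feat], dim=-1)
        answer_logits = self.answer_head(fused)
        type_logits = self.type_head(fused)
        # Use the soft answer-type prediction as an attention gate
        # over the answer-class logits.
        type_probs = type_logits.softmax(dim=-1)            # (batch, num_answer_types)
        gate = type_probs @ self.answer_type_mask           # (batch, num_answers)
        gated_logits = answer_logits * gate
        return gated_logits, type_logits
```

Under these assumptions, training would combine a cross-entropy loss on `gated_logits` with the auxiliary cross-entropy loss on `type_logits`, while the CLIP encoders that produce `image_feat` and `text_feat` remain frozen.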