Visual Question Answering (VQA) is a challenging task at the intersection of natural language processing (NLP) and computer vision (CV) that has attracted significant attention from researchers. English is a resource-rich language that has seen extensive development of datasets and models for visual question answering, whereas resources and models for other languages remain underdeveloped. In addition, no existing multilingual dataset targets the visual content of a particular country, with its own objects and cultural characteristics. To address this gap, we provide the research community with a benchmark dataset named EVJVQA, containing 33,000+ question-answer pairs in three languages (Vietnamese, English, and Japanese) over approximately 5,000 images taken in Vietnam, for evaluating multilingual VQA systems and models. EVJVQA served as the benchmark dataset for the multilingual visual question answering challenge at the 9th Workshop on Vietnamese Language and Speech Processing (VLSP 2022). The challenge attracted 62 participating teams from various universities and organizations. In this article, we present details of the organization of the challenge, an overview of the methods employed by the shared-task participants, and the results. The highest performances on the private test set are 0.4392 in F1-score and 0.4009 in BLEU. The multilingual QA systems proposed by the top two teams use ViT as the pre-trained vision model and mT5, a powerful pre-trained language model based on the Transformer architecture, as the language model. EVJVQA is a challenging dataset that motivates NLP and CV researchers to further explore multilingual models and systems for visual question answering. We released the challenge on the Codalab evaluation system for further research.
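As a rough illustration of the ViT + mT5 design the top two teams adopted, the sketch below composes a ViT image encoder with an mT5 decoder: ViT patch features are projected into mT5's hidden space, concatenated with the mT5-encoded question, and the decoder generates the answer by cross-attending over both. This is a minimal sketch under assumptions; the checkpoint names and the concatenation-based fusion are illustrative, not the participants' exact systems.

```python
# Minimal sketch of a ViT encoder + mT5 decoder VQA model.
# Checkpoint names and the fusion strategy are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import ViTModel, MT5ForConditionalGeneration


class ViTmT5VQA(nn.Module):
    def __init__(self,
                 vit_name="google/vit-base-patch16-224-in21k",  # assumed checkpoint
                 mt5_name="google/mt5-base"):                   # assumed checkpoint
        super().__init__()
        self.vit = ViTModel.from_pretrained(vit_name)
        self.mt5 = MT5ForConditionalGeneration.from_pretrained(mt5_name)
        # Project ViT features into mT5's hidden size (768 -> 768 for the
        # base checkpoints; kept explicit so other model sizes also work).
        self.proj = nn.Linear(self.vit.config.hidden_size,
                              self.mt5.config.d_model)

    def forward(self, pixel_values, input_ids, attention_mask, labels=None):
        # Encode the image: one feature vector per ViT patch (plus [CLS]).
        img_feats = self.proj(
            self.vit(pixel_values=pixel_values).last_hidden_state)
        # Encode the multilingual question with mT5's own text encoder.
        txt_feats = self.mt5.encoder(
            input_ids=input_ids,
            attention_mask=attention_mask).last_hidden_state
        # Fuse by concatenating along the sequence axis, so the mT5 decoder
        # cross-attends over both modalities while generating the answer.
        fused = torch.cat([img_feats, txt_feats], dim=1)
        img_mask = torch.ones(img_feats.shape[:2],
                              dtype=attention_mask.dtype,
                              device=attention_mask.device)
        fused_mask = torch.cat([img_mask, attention_mask], dim=1)
        # Passing encoder_outputs bypasses mT5's text encoder and feeds the
        # fused sequence straight to the decoder; labels trigger the usual
        # teacher-forced cross-entropy loss.
        return self.mt5(encoder_outputs=(fused,),
                        attention_mask=fused_mask,
                        labels=labels)
```

At inference time, answers would be decoded by calling `generate` on the underlying mT5 with the same fused `encoder_outputs`, so a single model handles Vietnamese, English, and Japanese questions alike.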