The continuous growth of social media and visual content on the internet has accelerated research in computer vision in general and in the image captioning task in particular. Generating a caption that best describes an image is useful for various applications, such as image indexing and serving as an auditory aid for the visually impaired. In recent years, the image captioning task has witnessed remarkable advances in both datasets and architectures, and as a result captioning quality has reached astounding performance. However, the majority of these advances, especially in datasets, target English, leaving other languages such as Arabic lagging behind. Arabic, despite being spoken by more than 450 million people and being among the fastest-growing languages on the internet, lacks the fundamental pillars needed to advance its image captioning research, such as benchmarks and unified datasets. This work is an attempt to expedite progress on this task by providing unified datasets and benchmarks, while also exploring methods and techniques that could enhance the performance of Arabic image captioning. The use of multi-task learning is explored, alongside various word representations and different visual features. The results show that multi-task learning and pre-trained word embeddings noticeably enhance captioning quality; however, they also show that Arabic captioning still lags behind English. The dataset and code used are available at this link.