Recent advances in deep learning have created many opportunities to solve real-world problems that had remained unsolved for more than a decade. Automatic caption generation is a major research field, and the research community has done extensive work on this problem for widely used languages such as English. Urdu is the national language of Pakistan and is also widely spoken and understood across the Pakistan-India subcontinent, yet no work has been done on caption generation for Urdu. Our research aims to fill this gap by developing an attention-based deep learning model that uses sequence-modelling techniques specialized for the Urdu language. We have prepared an Urdu-language dataset by translating a subset of the "Flickr8k" dataset containing 700 'man' images. We evaluate our proposed technique on this dataset and show that it achieves a BLEU score of 0.83 for Urdu. We improve on previously proposed techniques by using better CNN architectures and optimization techniques. Furthermore, we also experimented with adding a grammar loss to the model to make its predictions grammatically correct.
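As a brief illustration of the evaluation metric reported above, the following is a minimal sketch (not taken from the paper) of how a per-caption BLEU score can be computed with NLTK's sentence_bleu; the tokenized Urdu captions shown are hypothetical placeholders, whereas the actual evaluation would use the 700-image translated Flickr8k subset.

```python
# Illustrative sketch only: per-caption BLEU scoring for an Urdu caption.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical tokenized Urdu captions (placeholders, not from the dataset).
references = [["ایک", "آدمی", "پہاڑ", "پر", "چڑھ", "رہا", "ہے"]]   # "a man is climbing a mountain"
candidate = ["ایک", "آدمی", "پہاڑ", "پر", "چڑھتا", "ہے"]            # model prediction

# BLEU-1 (unigram precision) with smoothing, which is common for short captions.
score = sentence_bleu(
    references,
    candidate,
    weights=(1.0, 0.0, 0.0, 0.0),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU-1: {score:.2f}")
```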