Image captioning has an increasingly broad range of applications, and fashion is no exception. Automatic item descriptions are of great interest to fashion web platforms, which sometimes host hundreds of thousands of images. This paper is one of the first to tackle image captioning for fashion images. To help address dataset diversity issues, we introduce the InFashAIv1 dataset, containing almost 16,000 African fashion item images with their titles, prices, and general descriptions. We also use the well-known DeepFashion dataset in addition to InFashAIv1. Captions are generated with the \textit{Show and Tell} model, composed of a CNN encoder and an RNN decoder. We show that jointly training the model on both datasets improves caption quality for African-style fashion images, suggesting transfer learning from Western-style data. The InFashAIv1 dataset is released on \href{https://github.com/hgilles06/infashai}{Github} to encourage work with more diversity inclusion.
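To illustrate the encoder-decoder captioning scheme mentioned above, the following is a minimal, self-contained sketch of greedy RNN decoding conditioned on image features. It uses toy random weights and a tiny vocabulary; the feature vector stands in for a CNN encoder's output, and all names and dimensions here are illustrative, not the paper's actual implementation.

```python
import math
import random

random.seed(0)

VOCAB = ["<start>", "<end>", "a", "red", "dress", "with", "ankara", "print"]
V, D = len(VOCAB), 8  # vocab size and hidden/embedding size (toy values)

# Hypothetical stand-in for CNN image features (a real model would
# take these from a pretrained convolutional encoder).
image_features = [random.uniform(-1, 1) for _ in range(D)]

# Random toy weights; a trained model would learn these.
W_embed = [[random.uniform(-0.5, 0.5) for _ in range(D)] for _ in range(V)]
W_h = [[random.uniform(-0.5, 0.5) for _ in range(D)] for _ in range(2 * D)]
W_out = [[random.uniform(-0.5, 0.5) for _ in range(V)] for _ in range(D)]

def step(h, x):
    """One RNN step: h' = tanh(W_h . [h; x])."""
    concat = h + x  # concatenate hidden state and input, length 2*D
    return [math.tanh(sum(concat[i] * W_h[i][j] for i in range(2 * D)))
            for j in range(D)]

def decode(features, max_len=10):
    """Greedy decoding: image features initialise the hidden state,
    then tokens are emitted one at a time until <end> or max_len."""
    h = step([0.0] * D, features)  # condition the RNN on the image
    token = "<start>"
    caption = []
    for _ in range(max_len):
        h = step(h, W_embed[VOCAB.index(token)])
        logits = [sum(h[i] * W_out[i][j] for i in range(D)) for j in range(V)]
        token = VOCAB[max(range(V), key=lambda j: logits[j])]
        if token == "<end>":
            break
        caption.append(token)
    return caption

print(decode(image_features))
```

With learned weights, the same greedy loop (or beam search) produces the item descriptions; training maximises the likelihood of reference captions given the encoded image.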