Audio is the main medium through which the visually impaired obtain information. In practice, visual data of all kinds is ubiquitous, whereas corresponding audio data is often unavailable. To help visually impaired people better perceive the information around them, this paper proposes an image-to-audio-description (I2AD) task that generates audio descriptions directly from images. To address this entirely new task, a modal translation network (MT-Net) from the visual to the auditory sense is proposed. MT-Net consists of three progressive sub-networks: 1) feature learning, 2) cross-modal mapping, and 3) audio generation. First, the feature learning sub-network learns semantic features from images and audio, comprising an image feature learning branch and an audio feature learning branch. Second, the cross-modal mapping sub-network transforms the image feature into a cross-modal representation that carries the same semantic concept as the audio feature; in this way, the correlation between the two modalities is effectively exploited to bridge the heterogeneity gap between image and audio. Finally, the audio generation sub-network generates the audio waveform from the cross-modal representation, and the generated waveform is interpolated according to the sampling frequency to obtain the corresponding audio file. As this is the first attempt at the I2AD task, three large-scale datasets with abundant manual audio descriptions are built. Experiments on these datasets verify both the feasibility of generating intelligible audio directly from an image and the effectiveness of the proposed method.
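To make the three-stage pipeline concrete, below is a minimal PyTorch sketch of how the sub-networks could compose: image feature learning, cross-modal mapping, and audio generation. All module names, layer choices, and dimensions (ImageFeatureNet, CrossModalMapping, AudioGenerator, feat_dim, n_samples) are illustrative assumptions; the abstract does not specify the actual architectures.

```python
# A minimal sketch of the MT-Net pipeline described above. Layer choices
# and sizes are assumptions for illustration, not the paper's design.
import torch
import torch.nn as nn

class ImageFeatureNet(nn.Module):
    """Feature learning (image branch): map an image to a semantic feature."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, img):          # img: (B, 3, H, W)
        return self.backbone(img)    # semantic image feature (B, feat_dim)

class CrossModalMapping(nn.Module):
    """Transform the image feature into a representation aligned with the
    audio feature space, easing the heterogeneity gap between modalities."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, img_feat):
        return self.mlp(img_feat)    # cross-modal representation (B, feat_dim)

class AudioGenerator(nn.Module):
    """Audio generation: decode the cross-modal representation into a waveform."""
    def __init__(self, feat_dim=512, n_samples=16000):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, n_samples), nn.Tanh(),  # waveform values in [-1, 1]
        )

    def forward(self, z):
        return self.decoder(z)       # raw waveform (B, n_samples)

# Usage: image -> semantic feature -> cross-modal representation -> waveform.
# The waveform would then be resampled/interpolated to the target sampling
# frequency and written to an audio file.
img = torch.randn(2, 3, 128, 128)
feat = ImageFeatureNet()(img)
z = CrossModalMapping()(feat)
wave = AudioGenerator()(z)
print(wave.shape)                    # torch.Size([2, 16000])
```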