We propose a new problem: retrieving audio files relevant to multimodal design documents that combine textual elements and visual imagery, e.g., birthday/greeting cards. Beyond enhancing user experience, integrating audio that matches the theme/style of these documents also improves their accessibility (e.g., visually impaired users can listen to the audio instead). While recent work on audio retrieval exists, those methods and datasets explicitly target natural images, whereas our problem concerns multimodal design documents (created by users with creative software) that differ substantially from naturally captured photographs. To this end, our first contribution is collecting and curating a new large-scale dataset, Melodic-Design (or MELON), comprising design documents spanning various styles, themes, templates, illustrations, etc., paired with music audio. Given this paired image-text-audio dataset, our next contribution is a novel multimodal cross-attention audio retrieval (MMCAR) algorithm that trains neural networks to learn a common shared feature space across the image, text, and audio modalities. Using these learned features, we demonstrate that our method outperforms existing state-of-the-art methods, and we establish a new reference benchmark for the research community on our dataset.
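The abstract does not specify the MMCAR architecture, so the following is only a minimal sketch of one plausible instantiation of the idea it describes: image and text features fused with cross-attention into a single design-document embedding, aligned to audio embeddings in a shared space via a symmetric contrastive loss. All module names, dimensions, pooling choices, and the encoders' output shapes are illustrative assumptions, not the authors' method.

```python
# Hedged sketch of a tri-modal retrieval setup in the spirit of MMCAR.
# NOT the authors' code: architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalFusion(nn.Module):
    """Fuse text tokens into image tokens with cross-attention (assumed design)."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, txt_tokens):
        # Image tokens (queries) attend to text tokens (keys/values);
        # residual connection followed by layer norm.
        fused, _ = self.attn(img_tokens, txt_tokens, txt_tokens)
        return self.norm(img_tokens + fused)


class RetrievalModel(nn.Module):
    """Project fused design-document and audio features into a shared space."""

    def __init__(self, dim=512):
        super().__init__()
        self.fusion = CrossModalFusion(dim)
        self.doc_proj = nn.Linear(dim, dim)    # design-document head
        self.audio_proj = nn.Linear(dim, dim)  # audio head
        # Learnable temperature, initialized near ln(1/0.07) as in CLIP-style training.
        self.logit_scale = nn.Parameter(torch.tensor(2.659))

    def forward(self, img_tokens, txt_tokens, audio_feat):
        # Mean-pool the fused tokens into one embedding per design document.
        doc = self.fusion(img_tokens, txt_tokens).mean(dim=1)
        doc = F.normalize(self.doc_proj(doc), dim=-1)
        aud = F.normalize(self.audio_proj(audio_feat), dim=-1)
        # Similarity matrix between documents and audio clips in the batch.
        logits = self.logit_scale.exp() * doc @ aud.t()
        labels = torch.arange(logits.size(0), device=logits.device)
        # Symmetric InfoNCE: document-to-audio and audio-to-document.
        return (F.cross_entropy(logits, labels) +
                F.cross_entropy(logits.t(), labels)) / 2
```

At inference, retrieval under this sketch would rank audio clips by cosine similarity between a query document's embedding and the precomputed audio embeddings; the shared space is what makes that comparison meaningful across modalities.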