In this paper, we propose a novel approach to generalized zero-shot learning in a multi-modal setting, where novel classes of audio/video appear at test time that were not seen during training. We exploit the semantic relatedness of text embeddings for zero-shot learning by aligning audio and video embeddings with the feature space of the corresponding class-label text. Our approach combines a cross-modal decoder and a composite triplet loss. The cross-modal decoder enforces the constraint that the class-label text features can be reconstructed from the audio and video embeddings of data points, which pulls the audio and video embeddings closer to the class-label text embedding. The composite triplet loss operates jointly on the audio, video, and text embeddings: it brings embeddings from the same class closer together and pushes embeddings from different classes apart across modalities, improving performance on the multi-modal zero-shot learning task. Importantly, our multi-modal zero-shot learning approach still works when a modality is missing at test time. We evaluate our approach on generalized zero-shot classification and retrieval tasks and show that it outperforms other models both when a single modality is present and when multiple modalities are available. We validate our approach through comparisons with previous methods and through various ablations.
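To make the two components concrete, below is a minimal PyTorch-style sketch of a cross-modal decoder with a reconstruction objective and a composite triplet loss over audio, video, and text embeddings. All names, layer choices, dimensions, and the in-batch negative mining are illustrative assumptions, not the paper's exact architecture or mining strategy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalDecoder(nn.Module):
    # Reconstructs the class-label text features from a projected audio
    # or video embedding; the reconstruction loss encourages both
    # modalities to stay close to the text feature space.
    def __init__(self, embed_dim: int, text_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, text_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def _triplet(anchor, positive, negative, valid, margin):
    # Standard margin triplet term, averaged over valid triplets only.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    per_sample = F.relu(d_pos - d_neg + margin) * valid
    return per_sample.sum() / valid.sum().clamp(min=1.0)


def composite_triplet_loss(audio, video, text, labels, margin=1.0):
    # Negatives come from rolling the batch by one position (a simple
    # in-batch mining heuristic assumed here); triplets whose "negative"
    # happens to share the anchor's class are masked out.
    neg = torch.roll(torch.arange(labels.size(0)), shifts=1)
    valid = (labels != labels[neg]).float()
    loss = 0.0
    # Cross-modal triplets: each modality serves as anchor in turn, so
    # same-class embeddings are pulled together and different-class
    # embeddings pushed apart across all modality pairs.
    for anc, pos in [(text, audio), (text, video),
                     (audio, text), (video, text),
                     (audio, video), (video, audio)]:
        loss = loss + _triplet(anc, pos, pos[neg], valid, margin)
    return loss
```

A hypothetical training step combining the two objectives (random tensors stand in for the outputs of audio/video/text encoders, which are not specified here):

```python
decoder = CrossModalDecoder(embed_dim=512, text_dim=300)
audio_z = torch.randn(32, 512)        # audio embeddings in the shared space
video_z = torch.randn(32, 512)        # video embeddings in the shared space
text_z = torch.randn(32, 512)         # class-label text embeddings, projected
text_feat = torch.randn(32, 300)      # raw class-label text features
labels = torch.randint(0, 10, (32,))

recon = (F.mse_loss(decoder(audio_z), text_feat) +
         F.mse_loss(decoder(video_z), text_feat))
total = recon + composite_triplet_loss(audio_z, video_z, text_z, labels)
```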