Multimodal intent understanding is a significant research area that requires effectively leveraging multiple modalities to analyze human language. Existing methods face two main challenges in this domain. First, they have limitations in capturing the nuanced and high-level semantics underlying complex in-distribution (ID) multimodal intents. Second, they exhibit poor generalization when confronted with unseen out-of-distribution (OOD) data in real-world scenarios. To address these issues, we propose MIntOOD, a novel method for both ID classification and OOD detection. We first introduce a weighted feature fusion network that effectively models multimodal representations. This network dynamically learns the importance of each modality, adapting to multimodal contexts. To develop discriminative representations that are conducive to both tasks, we synthesize pseudo-OOD data from convex combinations of ID data and perform multimodal representation learning from both coarse-grained and fine-grained perspectives. The coarse-grained perspective focuses on distinguishing between the ID and OOD binary classes, while the fine-grained perspective enhances the understanding of ID data by incorporating binary confidence scores. These scores help gauge the difficulty of each sample, improving the classification of different ID classes. Additionally, the fine-grained perspective captures instance-level interactions between ID and OOD samples, promoting proximity among similar instances and separation from dissimilar ones. We establish baselines for three multimodal intent datasets and build an OOD benchmark. Extensive experiments on these datasets demonstrate that our method significantly improves OOD detection performance, with a 3-10% increase in AUROC scores, while achieving new state-of-the-art results in ID classification. The full data and code are available at https://github.com/thuiar/MIntOOD.
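Below is a minimal sketch, not the authors' implementation, of the two mechanisms the abstract names: sample-wise weighted fusion of modality features and pseudo-OOD synthesis via convex combinations of ID data (mixup-style). All module names, shapes, and the Beta-sampled mixing coefficient are illustrative assumptions; the paper's actual architecture is in the linked repository.

```python
# Illustrative sketch only: weighted modality fusion and convex-combination
# pseudo-OOD synthesis. Names and hyperparameters are assumptions, not the
# authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedFusion(nn.Module):
    """Fuse per-modality features with sample-wise learned importance weights."""

    def __init__(self, dim: int, num_modalities: int = 3):
        super().__init__()
        # One scalar importance score per modality, conditioned on its features.
        self.scorer = nn.Linear(dim, 1)
        self.num_modalities = num_modalities

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_modalities, dim), e.g. text/video/audio features.
        scores = self.scorer(feats).squeeze(-1)             # (batch, M)
        weights = F.softmax(scores, dim=-1)                 # modality importance
        return (weights.unsqueeze(-1) * feats).sum(dim=1)   # fused (batch, dim)


def synthesize_pseudo_ood(id_feats: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Create pseudo-OOD samples as convex combinations of pairs of ID samples."""
    lam = torch.distributions.Beta(alpha, alpha).sample((id_feats.size(0), 1)).to(id_feats.device)
    perm = torch.randperm(id_feats.size(0), device=id_feats.device)
    # Each synthetic sample mixes two ID representations; downstream training
    # treats these mixtures as OOD for the coarse-grained binary objective.
    return lam * id_feats + (1.0 - lam) * id_feats[perm]
```

In this reading, the fused ID representations and the synthesized mixtures would feed a coarse-grained ID-vs-OOD binary head, whose confidence scores then weight the fine-grained ID classification and instance-level contrast described in the abstract.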