Detecting out-of-domain (OOD) or unknown intents in user queries is essential for a task-oriented dialog system. A key challenge of OOD detection is learning discriminative semantic features. The traditional cross-entropy loss only considers whether a sample is correctly classified and does not explicitly enforce margins between categories. In this paper, we propose a supervised contrastive learning objective that minimizes intra-class variance by pulling together in-domain intents of the same class and maximizes inter-class variance by pushing apart samples from different classes. In addition, we employ an adversarial augmentation mechanism to obtain pseudo diverse views of a sample in the latent space. Experiments on two public datasets demonstrate the effectiveness of our method in capturing discriminative representations for OOD detection.
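The supervised contrastive objective described above follows the general form of supervised contrastive learning: for each anchor, same-class samples in the batch act as positives and all other samples as negatives. The sketch below is a minimal NumPy illustration of that loss, not the paper's exact implementation; the function name, batch construction, and temperature value are assumptions for the example.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Illustrative supervised contrastive loss over a batch.

    features: (n, d) array of encoder outputs (illustrative, not the
        paper's actual encoder); labels: (n,) integer class ids.
    Pulls together same-class pairs and pushes apart different-class
    pairs, as described in the abstract.
    """
    # L2-normalize so the dot product is cosine similarity
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)           # exclude self-pairs
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    exp_sim = np.exp(sim) * not_self
    # log-softmax over all other samples in the batch
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))

    # positives: same label, different sample
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    loss = 0.0
    for i in range(n):
        if pos_mask[i].any():
            loss += -log_prob[i, pos_mask[i]].mean()
    return loss / n
```

In this formulation, a batch whose features cluster by class yields a lower loss than the same features with mismatched labels, which is exactly the intra-class/inter-class variance behavior the abstract targets. The adversarial augmentation step would additionally perturb `features` in the latent space to create extra views before computing this loss.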