As it is cumbersome and expensive to acquire large amounts of data for training neural dialog models, data augmentation has been proposed to make more effective use of existing training samples. However, current data augmentation techniques for the dialog generation task mostly augment all cases in the training dataset without considering the intrinsic attributes that distinguish them. We argue that not all cases are beneficial for augmentation, and that cases suitable for augmentation should satisfy the following two attributes: (1) low quality (the dialog model cannot yet generate a high-quality response for the case), and (2) representativeness (the case should reflect the properties of the whole dataset). Herein, we explore this idea by proposing a Selective Data Augmentation framework (SDA) for the response generation task. SDA employs a dual adversarial network to select the lowest-quality and most representative data points for augmentation in a single stage. Extensive experiments conducted on two publicly available datasets, i.e., DailyDialog and OpenSubtitles, show that our framework improves response generation performance across various metrics.
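To make the selection criterion concrete, the sketch below ranks training cases by combining a (low) quality score with a (high) representativeness score and keeps the top candidates for augmentation. It is a minimal illustration only: the scoring functions `quality_score` and `representativeness_score` are hypothetical stand-ins for learned signals, not the dual adversarial network described in the paper.

```python
# Minimal sketch of the selection idea: rank training cases so that
# low-quality, highly representative cases come first, then keep the
# top-k for augmentation. The scores are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class DialogCase:
    context: str
    response: str


def select_for_augmentation(
    cases: List[DialogCase],
    quality_score: Callable[[DialogCase], float],            # higher = model already responds well
    representativeness_score: Callable[[DialogCase], float],  # higher = more typical of the dataset
    k: int,
) -> List[DialogCase]:
    """Pick the k cases that are both low quality and highly representative."""
    scored: List[Tuple[float, DialogCase]] = [
        # Low quality and high representativeness both raise the combined score.
        (representativeness_score(c) - quality_score(c), c) for c in cases
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]


if __name__ == "__main__":
    toy = [
        DialogCase("hi", "hello"),
        DialogCase("how are you", "fine"),
        DialogCase("bye", "see you"),
    ]
    # Toy scores standing in for learned model outputs.
    picked = select_for_augmentation(
        toy,
        quality_score=lambda c: len(c.response) / 10.0,
        representativeness_score=lambda c: 1.0 if "you" in c.context else 0.5,
        k=2,
    )
    for case in picked:
        print(case.context, "->", case.response)
```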