Contrastive learning has achieved impressive success in generation tasks by mitigating the "exposure bias" problem and discriminatively exploiting references of differing quality. Existing works mostly apply contrastive learning at the instance level without discriminating the contribution of each word, yet keywords are the gist of a text and dominate its constrained mapping relationships. Hence, in this work, we propose a hierarchical contrastive learning mechanism that unifies semantic meanings at hybrid granularities in the input text. Concretely, we first build a keyword graph via contrastive correlations of positive-negative pairs to iteratively polish the keyword representations. Then, we construct intra-contrasts at the instance level and the keyword level, where we assume words are sampled nodes from a sentence distribution. Finally, to bridge the gap between the independent contrast levels and tackle the common contrast-vanishing problem, we propose an inter-contrast mechanism that measures the discrepancy between the contrastive keyword nodes and the instance distribution. Experiments demonstrate that our model outperforms competitive baselines on paraphrasing, dialogue generation, and storytelling tasks.
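To make the three contrast terms concrete, the following is a minimal PyTorch sketch of how an instance-level contrast, a keyword-level contrast, and an inter-level term bridging them could be combined. Everything here is an illustrative assumption for exposition, not the paper's released implementation: the function names, the InfoNCE formulation of the intra-contrasts, the mean-pooled negative keyword, and the weighting scheme are all hypothetical.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: pull the positive close, push negatives away.

    anchor:    (d,)   representation of the anchor (e.g. a generated sentence)
    positive:  (d,)   representation of the positive reference
    negatives: (k, d) representations of k negative references
    """
    # Cosine similarities between the anchor and each candidate.
    candidates = torch.cat([positive.unsqueeze(0), negatives], dim=0)  # (k+1, d)
    sims = F.cosine_similarity(anchor.unsqueeze(0), candidates, dim=-1) / temperature
    # The positive sits at index 0, so the loss is cross-entropy against label 0.
    target = torch.zeros(1, dtype=torch.long, device=sims.device)
    return F.cross_entropy(sims.unsqueeze(0), target)

def hierarchical_contrastive_loss(inst_anchor, inst_pos, inst_negs,
                                  kw_anchor, kw_pos, kw_negs,
                                  alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical combination of the three terms sketched in the abstract:
    an instance-level contrast, a keyword-level contrast, and an inter-level
    term relating keyword nodes to the instance representation."""
    l_inst = info_nce(inst_anchor, inst_pos, inst_negs)   # intra-contrast, instances
    l_kw = info_nce(kw_anchor, kw_pos, kw_negs)           # intra-contrast, keywords
    # Inter-contrast (assumption): score keyword nodes against the instance
    # representation so the positive keyword lands closer to the instance than
    # the negatives do, bridging the two otherwise independent levels.
    pos_sim = F.cosine_similarity(kw_pos, inst_anchor, dim=-1)
    neg_sim = F.cosine_similarity(kw_negs.mean(dim=0), inst_anchor, dim=-1)
    l_inter = F.softplus(neg_sim - pos_sim)  # margin-free ranking penalty
    return alpha * l_inst + beta * l_kw + gamma * l_inter
```

The split into two `info_nce` calls plus an explicit inter-level term mirrors the abstract's claim that the two intra-contrasts are independent and need a separate mechanism to connect them; the ranking penalty is one simple way such a bridge could be instantiated.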