Recently, pre-trained Transformer-based models have achieved great success on the task of definition generation (DG). However, previous encoder-decoder models lack effective representation learning to capture the full semantic components of the given word, which leads them to generate under-specific definitions. To address this problem, we propose a novel contrastive learning method that encourages the model to capture more detailed semantic representations from the definition sequence encoding. Experimental results on three mainstream benchmarks, under both automatic and manual evaluation, demonstrate that the proposed method generates more specific and higher-quality definitions than several state-of-the-art models.
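To make the contrastive objective concrete, the sketch below shows a standard in-batch InfoNCE loss over pooled sequence encodings. This is a minimal, hypothetical illustration of contrastive representation learning in general, not the paper's actual objective; the pooling scheme, pair construction, and temperature are all assumptions.

```python
# Hypothetical sketch: in-batch InfoNCE contrastive loss over sequence encodings.
# The paper's actual method may construct pairs and negatives differently.
import torch
import torch.nn.functional as F


def info_nce_loss(anchor: torch.Tensor,
                  positive: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """Each row of `anchor` is pulled toward the matching row of `positive`;
    all other rows in the batch serve as in-batch negatives."""
    anchor = F.normalize(anchor, dim=-1)        # (batch, dim), unit-length
    positive = F.normalize(positive, dim=-1)    # (batch, dim), unit-length
    logits = anchor @ positive.T / temperature  # cosine-similarity logits
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)     # match row i to column i


# Usage sketch: pool encoder hidden states for the given word and its
# definition into fixed-size vectors, then apply the contrastive loss.
word_repr = torch.randn(8, 256)        # assumed pooled encoding of the word
definition_repr = torch.randn(8, 256)  # assumed pooled definition encoding
loss = info_nce_loss(word_repr, definition_repr)
```

Pulling the word encoding toward its own definition encoding, while pushing it away from other definitions in the batch, is one common way such an objective encourages representations that retain word-specific semantic detail.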