Deep neural networks have come to dominate the literature on aspect-based sentiment analysis (ABSA), yielding state-of-the-art results. However, these deep models are prone to learning spurious correlations between input features and output labels, which leads to poor robustness and generalization. In this paper, we propose a novel Contrastive Variational Information Bottleneck framework (called CVIB) to reduce spurious correlations for ABSA. The proposed CVIB framework is composed of an original network and a self-pruned network, and the two networks are optimized simultaneously via contrastive learning. Concretely, we employ the Variational Information Bottleneck (VIB) principle to learn an informative and compressed network (the self-pruned network) from the original network, which discards superfluous patterns and spurious correlations between input features and prediction labels. Then, self-pruning contrastive learning is devised to pull together semantically similar positive pairs and push apart dissimilar pairs: the representations of the anchor learned by the original and self-pruned networks, respectively, are regarded as a positive pair, while the representations of two different sentences within a mini-batch are treated as a negative pair. Extensive experiments on five benchmark ABSA datasets demonstrate that our CVIB method outperforms strong competitors in terms of overall prediction performance, robustness, and generalization.
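The self-pruning contrastive objective described above can be sketched as an InfoNCE-style loss, where the i-th sentence's representations from the original and self-pruned networks form the positive pair and all cross-sentence pairs in the mini-batch serve as negatives. This is a minimal NumPy sketch under stated assumptions; the function name, temperature value, and use of cosine similarity are illustrative choices, not details confirmed by the paper.

```python
import numpy as np

def self_pruning_contrastive_loss(z_orig, z_pruned, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    z_orig   : (B, d) sentence representations from the original network
    z_pruned : (B, d) representations of the same sentences from the
               self-pruned network
    Row i of z_orig and row i of z_pruned are a positive pair; every
    cross-sentence pair within the mini-batch is a negative pair.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z_orig / np.linalg.norm(z_orig, axis=1, keepdims=True)
    z2 = z_pruned / np.linalg.norm(z_pruned, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / temperature            # (B, B) similarity matrix
    sim -= sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives sit on the diagonal; minimize their negative log-probability
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls each anchor's two views together while pushing it away from other sentences in the batch, which is the "pull together / push apart" behavior the abstract describes.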