Existing language understanding methods have demonstrated a remarkable ability to recognize patterns in text via machine learning. However, they apply the learned patterns indiscriminately at test time, which differs fundamentally from humans, who engage in counterfactual thinking, e.g., scrutinizing hard test samples. Inspired by this, we propose a Counterfactual Reasoning Model, which mimics counterfactual thinking by learning from a few counterfactual samples. In particular, we devise a generation module that produces representative counterfactual samples for each factual sample, and a retrospective module that revisits the model prediction by comparing the counterfactual and factual samples. Extensive experiments on sentiment analysis (SA) and natural language inference (NLI) validate the effectiveness of our method.