Existing Chinese text error detection focuses mainly on spelling and simple grammatical errors. These errors have been studied extensively and are relatively easy for humans to recognize. In contrast, Chinese semantic errors are understudied and more complex, such that even humans cannot easily recognize them. The task of this paper is Chinese Semantic Error Recognition (CSER), a binary classification task that determines whether a sentence contains semantic errors. Existing research offers no effective method for this task. In this paper, we inherit the model structure of BERT and design several syntax-related pre-training tasks so that the model can learn syntactic knowledge. Our pre-training tasks consider both the directionality of the dependency structure and the diversity of the dependency relations. Because no published dataset exists for CSER, we build the first high-quality dataset for this task, named the Corpus of Chinese Linguistic Semantic Acceptability (CoCLSA). Experimental results on CoCLSA show that our method outperforms universal pre-trained models and syntax-infused models.
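To make the idea of syntax-related pre-training concrete, the sketch below shows one plausible way to attach auxiliary heads to BERT that predict the direction and the relation label of a dependency arc between two tokens, alongside standard masked language modeling. This is a minimal illustration under assumed details, not the paper's exact method: the class name, label count, and arc-pairing scheme are hypothetical.

```python
# A minimal sketch (not the paper's exact method) of syntax-related
# pre-training heads on BERT: one head predicts the direction of a
# dependency arc between two tokens, the other its relation type.
# Model name, num_relations, and the pairing scheme are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel

class SyntaxPretrainHeads(nn.Module):
    def __init__(self, model_name="bert-base-chinese", num_relations=14):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Directionality: does the arc point left-to-right or right-to-left?
        self.direction = nn.Linear(2 * hidden, 2)
        # Diversity: which dependency relation (e.g., nsubj, dobj, ...)?
        self.relation = nn.Linear(2 * hidden, num_relations)

    def forward(self, input_ids, attention_mask, head_idx, dep_idx):
        # head_idx / dep_idx: (batch,) token positions of an arc's endpoints,
        # taken from an external dependency parse of the input sentence.
        states = self.bert(input_ids,
                           attention_mask=attention_mask).last_hidden_state
        batch = torch.arange(states.size(0), device=states.device)
        pair = torch.cat([states[batch, head_idx],
                          states[batch, dep_idx]], dim=-1)
        return self.direction(pair), self.relation(pair)
```

In such a setup, cross-entropy losses on the two heads would typically be summed with the masked-language-modeling loss during pre-training; the resulting encoder is then fine-tuned for the CSER binary classification task.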