Existing approaches extend BERT from different perspectives, e.g., by designing new pre-training tasks, modeling different semantic granularities, or modifying the model architecture. Few models consider extending BERT to different text formats. In this paper, we propose the heterogeneous knowledge language model (HKLM), a unified pre-trained language model (PLM) for all forms of text, including unstructured text, semi-structured text and well-structured text. To capture the correspondences among these multi-format knowledge sources, our approach uses the masked language model objective to learn word knowledge, and uses the triple classification objective and the title matching objective to learn entity knowledge and topic knowledge, respectively. To obtain the aforementioned multi-format text, we construct a corpus in the tourism domain and conduct experiments on 5 tourism NLP datasets. The results show that our approach outperforms plain-text pre-training while using only 1/4 of the data. The code, datasets, corpus and knowledge graph will be released.
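As a minimal sketch, the three objectives described above can be combined into a single joint training loss; the weighted-sum form and the balancing weights $\lambda_1, \lambda_2$ below are illustrative assumptions, not values stated in the abstract:

\[
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{MLM}} \;+\; \lambda_1 \, \mathcal{L}_{\mathrm{TC}} \;+\; \lambda_2 \, \mathcal{L}_{\mathrm{TM}},
\]

where $\mathcal{L}_{\mathrm{MLM}}$ is the masked language model loss over unstructured text (word knowledge), $\mathcal{L}_{\mathrm{TC}}$ is the triple classification loss over well-structured triples (entity knowledge), and $\mathcal{L}_{\mathrm{TM}}$ is the title matching loss over semi-structured text (topic knowledge).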