We introduce N-LTP, an open-source neural language technology platform supporting six fundamental Chinese NLP tasks: lexical analysis (Chinese word segmentation, part-of-speech tagging, and named entity recognition), syntactic parsing (dependency parsing), and semantic parsing (semantic dependency parsing and semantic role labeling). Unlike existing state-of-the-art toolkits such as Stanza, which adopt an independent model for each task, N-LTP adopts a multi-task framework built on a shared pre-trained model, which has the advantage of capturing knowledge shared across related Chinese tasks. In addition, we introduce knowledge distillation, in which single-task models teach the multi-task model, to encourage the multi-task model to surpass its single-task teachers. Finally, we provide a collection of easy-to-use APIs and a visualization tool that let users apply the toolkit and view the processing results directly. To the best of our knowledge, this is the first toolkit to support these six fundamental Chinese NLP tasks. Source code, documentation, and pre-trained models are available at \url{https://ltp.ai/}.
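The distillation setup described above can be sketched as a temperature-scaled KL objective between a single-task teacher's output distribution and the multi-task student's, which is the standard form of knowledge distillation; this is a minimal illustrative sketch in NumPy, not N-LTP's actual training code, and all names here are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a 1-D array of logits."""
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    In the setup the abstract describes, the teacher is a single-task
    model and the student is the shared multi-task model; this term is
    added to the student's usual supervised loss.
    """
    p = softmax(np.asarray(teacher_logits, dtype=float), temperature)
    q = softmax(np.asarray(student_logits, dtype=float), temperature)
    # temperature**2 rescales gradients to match the hard-label loss
    return float(np.sum(p * (np.log(p) - np.log(q)))) * temperature ** 2
```

When the student reproduces the teacher's logits exactly, the loss is zero; otherwise it is positive, pulling the multi-task model toward each single-task teacher on that teacher's task.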