We introduce \texttt{N-LTP}, an open-source neural language technology platform supporting six fundamental Chinese NLP tasks: {lexical analysis} (Chinese word segmentation, part-of-speech tagging, and named entity recognition), {syntactic parsing} (dependency parsing), and {semantic parsing} (semantic dependency parsing and semantic role labeling). Unlike existing state-of-the-art toolkits such as \texttt{Stanza}, which adopt an independent model for each task, \texttt{N-LTP} adopts a multi-task framework built on a shared pre-trained model, which has the advantage of capturing shared knowledge across related Chinese tasks. In addition, a knowledge distillation method \cite{DBLP:journals/corr/abs-1907-04829}, in which single-task models teach the multi-task model, is further introduced to encourage the multi-task model to surpass its single-task teachers. Finally, we provide a collection of easy-to-use APIs and a visualization tool that let users access the toolkit and view the processing results more easily and directly. To the best of our knowledge, this is the first toolkit to support all six of these fundamental Chinese NLP tasks. Source code, documentation, and pre-trained models are available at \url{https://github.com/HIT-SCIR/ltp}.
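For illustration, a minimal usage sketch of the Python API is shown below. It follows the typical \texttt{LTP 4.x} interface; the method names (\texttt{seg}, \texttt{pos}, \texttt{ner}, \texttt{dep}, \texttt{sdp}, \texttt{srl}) are assumptions based on that interface and may differ across releases.

\begin{verbatim}
from ltp import LTP

# Load the default pre-trained multi-task model
ltp = LTP()

# Chinese word segmentation; `hidden` caches the shared encoder
# states so that downstream tasks reuse the same representation.
seg, hidden = ltp.seg(["他叫汤姆去拿外衣。"])

pos = ltp.pos(hidden)  # part-of-speech tagging
ner = ltp.ner(hidden)  # named entity recognition
dep = ltp.dep(hidden)  # dependency parsing
sdp = ltp.sdp(hidden)  # semantic dependency parsing
srl = ltp.srl(hidden)  # semantic role labeling
\end{verbatim}

Because all tasks share one encoder, a sentence is encoded once and every downstream analysis reuses the cached states, rather than re-running a separate model per task as in pipeline-style toolkits.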