We present an unsupervised word segmentation model whose learning objective is to maximize the generation probability of a sentence over all of its possible segmentations. This generation probability can be factorized recursively into the likelihood of each candidate segment given its context. To better capture long- and short-term dependencies, we propose bi-directional neural language models that represent the features of a segment's context from both sides. We also describe two decoding algorithms that combine the context features from both directions to produce the final segmentation, which helps resolve word-boundary ambiguities. Experimental results show that our context-sensitive unsupervised segmentation model achieves state-of-the-art performance under different evaluation settings on various Chinese datasets, and comparable results for Thai.
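The recursive factorization described above can be sketched as a dynamic program: the probability of generating a sentence is the sum, over all segmentations, of the product of segment likelihoods given their left context. This is a minimal illustration, not the paper's implementation; in the actual model each segment is scored by a bi-directional neural language model, whereas `segment_prob` below is a hypothetical toy scorer.

```python
def segment_prob(segment, left_context):
    # Hypothetical stand-in for the neural language model's segment
    # likelihood P(segment | context): uniform 0.1 for segments of
    # length 1..4, zero otherwise.
    return 0.1 if 1 <= len(segment) <= 4 else 0.0

def sentence_prob(chars, max_len=4):
    """Total generation probability of `chars`, summed over all
    segmentations, via the forward recursion

        alpha[j] = sum_{i < j} alpha[i] * P(chars[i:j] | chars[:i])

    where alpha[j] is the probability of generating the prefix chars[:j].
    """
    n = len(chars)
    alpha = [0.0] * (n + 1)
    alpha[0] = 1.0  # empty prefix generates itself with probability 1
    for j in range(1, n + 1):
        # Consider every segment chars[i:j] ending at position j.
        for i in range(max(0, j - max_len), j):
            alpha[j] += alpha[i] * segment_prob(chars[i:j], chars[:i])
    return alpha[n]
```

For a two-character input, the sum covers both segmentations: the whole string as one segment plus the two single-character segments, i.e. 0.1 + 0.1 * 0.1 = 0.11 under the toy scorer.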