Sanskrit Word Segmentation (SWS) is essential for making digitized texts accessible and for deploying downstream tasks. It is, however, non-trivial because of the sandhi phenomenon, which modifies characters at word boundaries and demands special treatment. Existing lexicon-driven approaches for SWS use the Sanskrit Heritage Reader, a lexicon-driven shallow parser, to generate the complete candidate solution space, over which various methods are applied to select the most valid solution. However, these approaches fail when they encounter out-of-vocabulary tokens. Purely engineering methods for SWS, on the other hand, leverage recent advances in deep learning but cannot exploit latent word information when it is available. To mitigate the shortcomings of both families of approaches, we propose the Transformer-based Linguistically Informed Sanskrit Tokenizer (TransLIST), which consists of (1) a module that encodes the character input along with latent word information, accounts for the sandhi phenomenon specific to SWS, and works with partial or no candidate solutions, (2) a novel soft-masked attention mechanism to prioritize potential candidate words, and (3) a novel path-ranking algorithm to rectify corrupted predictions. Experiments on the benchmark datasets for SWS show that TransLIST outperforms the current state-of-the-art system by an average absolute gain of 7.2 points on the perfect match (PM) metric. The codebase and datasets are publicly available at https://github.com/rsingha108/TransLIST
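As a conceptual illustration of the soft-masked attention idea, the sketch below biases standard scaled dot-product attention logits with a soft candidate prior instead of applying a hard mask. It is a minimal sketch assuming PyTorch; the tensor shapes, the `candidate_prior` input, and the log-prior mixing are illustrative assumptions, not the exact TransLIST formulation.

```python
# Minimal sketch of soft-masked attention (illustrative, not the exact
# TransLIST formulation): candidate words from the lexicon induce a soft
# prior over attention logits rather than a hard 0/1 mask.
import torch
import torch.nn.functional as F

def soft_masked_attention(q, k, v, candidate_prior):
    """Scaled dot-product attention whose logits are biased by a soft mask.

    q, k, v:         (batch, seq_len, d) query/key/value tensors
    candidate_prior: (batch, seq_len, seq_len) scores in [0, 1], higher for
                     positions covered by lexicon candidate words (assumed
                     input; how it is built is not shown here)
    """
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5
    # Soft mask: adding a log-prior prioritizes candidate positions while
    # never fully suppressing the rest (unlike a -inf hard mask).
    logits = logits + torch.log(candidate_prior + 1e-6)
    weights = F.softmax(logits, dim=-1)
    return weights @ v
```

Keeping the mask soft (a log-prior bias rather than setting non-candidate logits to negative infinity) lets the model degrade gracefully when the lexicon yields partial or no candidate solutions, which is the regime the abstract highlights.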