Sanskrit is a classical language with about 30 million extant manuscripts fit for digitisation, available in written, printed or scanned-image forms. However, it is still considered to be a low-resource language when it comes to available digital resources. In this work, we release a post-OCR text correction dataset containing around 218,000 sentences, with 1.5 million words, from 30 different books. Texts in Sanskrit are known to be diverse in terms of their linguistic and stylistic usage, since Sanskrit was the 'lingua franca' for discourse in the Indian subcontinent for about three millennia. Keeping this in mind, we release a multi-domain dataset, from areas as diverse as astronomy, medicine and mathematics, with some texts as old as 18 centuries. Further, we release multiple strong baselines as benchmarks for the task, based on pre-trained Seq2Seq language models. We find that our best-performing model, consisting of byte-level tokenisation in conjunction with phonetic encoding (ByT5+SLP1), yields a 23-point improvement over the OCR output in terms of word and character error rates. Moreover, we perform extensive experiments in evaluating these models on their performance and analyse common causes of mispredictions both at the graphemic and lexical levels. Our code and dataset are publicly available at https://github.com/ayushbits/pe-ocr-sanskrit.
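The word and character error rates reported above are the standard edit-distance metrics used for OCR and post-correction evaluation. A minimal sketch of how they are typically computed (illustrative only, not the authors' evaluation code):

```python
def edit_distance(a, b):
    # Classic single-row dynamic-programming Levenshtein distance.
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            # deletion, insertion, substitution (or match)
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return dp[n]

def cer(reference, hypothesis):
    # Character error rate: edits per reference character.
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

def wer(reference, hypothesis):
    # Word error rate: edits per reference token (whitespace split).
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)
```

A "23-point improvement" then means the corrected output's WER/CER is 23 percentage points lower than that of the raw OCR output against the same ground truth.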