The development of HTR models has become a conventional step in digital humanities projects. The performance of these models, often quite high, relies on manual transcription of numerous handwritten documents. Although this method has proven successful for Latin scripts, a comparable amount of data is not yet available for scripts considered poorly-endowed, such as Arabic scripts. In that respect, we introduce and assess a new modus operandi for the development and fine-tuning of HTR models dedicated to Arabic Maghrib{\=i} scripts. A comparison of several state-of-the-art HTR engines demonstrates the relevance of a word-based neural approach specialized for Arabic, capable of achieving an error rate below 5% with only 10 manually transcribed pages. These results open new perspectives for the processing of Arabic scripts and, more generally, of poorly-endowed languages. This research is part of the development of the RASAM dataset, in partnership with the GIS MOMM and the BULAC.