To transcribe spoken language into a written medium, most alphabets rely on a largely unambiguous sound-to-letter correspondence. However, some writing systems have drifted away from this simple principle, and little work exists in Natural Language Processing (NLP) on measuring such distance. In this study, we use an Artificial Neural Network (ANN) model to evaluate the transparency between written words and their pronunciation, hence its name: Orthographic Transparency Estimation with an ANN (OTEANN). Using datasets derived from Wikimedia dictionaries, we trained and tested this model to score the percentage of correct predictions in phoneme-to-grapheme and grapheme-to-phoneme translation tasks. The scores obtained for 17 orthographies were in line with the estimates of other studies. Interestingly, the model also provided insight into the typical mistakes made by learners who apply only the phonemic rule in reading and writing.
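The transparency score described above (percentage of correct predictions) can be sketched as a simple exact-match accuracy. This is an illustration only: the `predict` function and the toy word/phoneme pairs below are hypothetical stand-ins for the trained ANN and the Wikimedia-derived test set.

```python
def transparency_score(pairs, predict):
    """Return the percentage of words whose predicted pronunciation
    exactly matches the reference phonemic transcription."""
    correct = sum(1 for word, phonemes in pairs if predict(word) == phonemes)
    return 100.0 * correct / len(pairs)

# Toy example (hypothetical data): a naive letter-by-letter reading rule,
# one regular word it handles and one irregular word it gets wrong.
naive_rule = lambda word: tuple(word)  # read each letter as its own phoneme
test_set = [
    ("no", ("n", "o")),   # regular: naive rule succeeds
    ("of", ("o", "v")),   # irregular: the 'f' is pronounced /v/
]
print(transparency_score(test_set, naive_rule))  # 50.0
```

A fully transparent orthography would score near 100 under such a rule, while opaque ones fall well below; in the study this accuracy is computed with a trained model rather than a fixed rule.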