In this study, we developed the first baseline readability model for the Cebuano language. Cebuano is the second most-used native language in the Philippines, with about 27.5 million speakers. As baseline features, we extracted traditional or surface-based features, syllable patterns based on Cebuano's documented orthography, and neural embeddings from the multilingual BERT model. Results show that combining the first two sets of handcrafted linguistic features yielded the best performance, with an optimized Random Forest model achieving approximately 87% across all evaluation metrics. The best-performing feature sets and algorithm also mirror previous results in readability assessment for the Filipino language, suggesting potential for cross-lingual application. To encourage more work on readability assessment for Philippine languages such as Cebuano, we open-source both our code and data.
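The pipeline summarized above (handcrafted surface features fed to an optimized Random Forest classifier) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the study's released code: the feature set is reduced to toy surface counts, and the hyperparameters and helper names (`surface_features`, `train_baseline`) are hypothetical.

```python
# Hypothetical sketch of a surface-feature + Random Forest readability baseline.
# Feature choices and hyperparameters are illustrative, not the paper's exact setup.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report


def surface_features(text: str) -> list[float]:
    """Toy surface-level (traditional) features: word count, sentence count,
    and average word length. The actual study uses a richer feature set."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    return [float(len(words)), float(len(sentences)), avg_word_len]


def train_baseline(texts: list[str], labels: list[int]) -> RandomForestClassifier:
    """Train a Random Forest on surface features of Cebuano passages,
    where labels are grade-level readability classes."""
    X = [surface_features(t) for t in texts]
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, random_state=42
    )
    # In the study the model is tuned (e.g., via hyperparameter search);
    # fixed values here are placeholders.
    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))
    return clf
```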