Annotation projection is an important area in NLP that can greatly contribute to creating language resources for low-resource languages. Word alignment plays a key role in this setting. However, most existing word alignment methods are designed for the high-resource machine translation setting, where millions of parallel sentences are available. For low-resource languages, this amount shrinks to a few thousand sentences, at which point the established IBM models fail. In this paper, we propose subword sampling-based alignment of text units. The hypothesis of this method is that aggregating different granularities of text for certain language pairs can help word-level alignment. For language pairs with gold-standard alignments, we propose an iterative Bayesian optimization framework that selects subwords from the space of possible subword representations of the source and target sentences. We show that the subword sampling method consistently outperforms word-level alignment on six language pairs: English-German, English-French, English-Romanian, English-Persian, English-Hindi, and English-Inuktitut. In addition, we show that hyperparameters learned for certain language pairs can be transferred to other languages without supervision and consistently improve alignment results. We observe that with 5K parallel sentences, our subword sampling approach obtains F1 scores similar to those of the existing word-level fast-align/eflomal alignment methods trained on hundreds of thousands of parallel sentences.


