Traditionally, character-level transduction problems have been solved with finite-state models designed to encode structural and linguistic knowledge of the underlying process, whereas recent approaches rely on the power and flexibility of sequence-to-sequence models with attention. Focusing on the less explored unsupervised learning scenario, we compare the two model classes side by side and find that they tend to make different types of errors even when achieving comparable performance. We analyze the distributions of different error classes using two unsupervised tasks as testbeds: converting informally romanized text into the native script of its language (for Russian, Arabic, and Kannada) and translating between a pair of closely related languages (Serbian and Bosnian). Finally, we investigate how combining finite-state and sequence-to-sequence models at decoding time affects the output quantitatively and qualitatively.
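As a rough illustration of what combining the two model classes at decoding time can look like, the sketch below mixes per-character log-probabilities from a seq2seq model and a finite-state model via log-linear interpolation inside a greedy decoder. This is a minimal sketch under assumed interfaces: the names `seq2seq_logprobs`, `fst_logprobs`, and the interpolation `weight` are illustrative placeholders, not the paper's actual implementation.

```python
# Hypothetical sketch: log-linear combination of a seq2seq model and a
# finite-state model at decoding time. Both scoring functions map a decoded
# prefix to a dict of next-character log-probabilities (assumed interface).

from typing import Callable, Dict

ScoreFn = Callable[[str], Dict[str, float]]

def combine_scores(seq2seq_logprobs: ScoreFn,
                   fst_logprobs: ScoreFn,
                   weight: float) -> ScoreFn:
    """Return a scoring function that interpolates the two models'
    next-character log-probabilities, with `weight` on the seq2seq side."""
    def score(prefix: str) -> Dict[str, float]:
        s = seq2seq_logprobs(prefix)
        f = fst_logprobs(prefix)
        shared = set(s) & set(f)  # keep only characters both models allow
        return {c: weight * s[c] + (1.0 - weight) * f[c] for c in shared}
    return score

def greedy_decode(score_fn: ScoreFn, eos: str = "</s>", max_len: int = 100) -> str:
    """Greedy decoding with the combined score; a beam search would follow
    the same pattern, keeping the k best prefixes at each step."""
    prefix = ""
    for _ in range(max_len):
        scores = score_fn(prefix)
        if not scores:
            break
        best = max(scores, key=scores.get)
        if best == eos:
            break
        prefix += best
    return prefix
```

In this sketch the finite-state model acts as a hard constraint as well as a soft score: characters it assigns no probability to are pruned from the candidate set, while the interpolation weight controls how much the neural model's preferences dominate among the remaining options.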