Text style transfer has gained increasing attention from the research community in recent years. However, the proposed approaches vary in many ways, which makes it hard to assess the individual contribution of model components. In style transfer, the most important component is the optimization technique used to guide learning in the absence of parallel training data. In this work we empirically compare the dominant optimization paradigms that provide supervision signals during training: backtranslation, adversarial training, and reinforcement learning. We find that backtranslation has model-specific limitations that inhibit the training of style transfer models. Reinforcement learning yields the best performance gains, while adversarial training, despite its popularity, does not offer an advantage over it. We also experiment with Minimum Risk Training, a technique popular in the machine translation community that, to our knowledge, has not been empirically evaluated on the style transfer task. We fill this research gap and empirically demonstrate its efficacy.