Text style transfer (TST) without parallel data has achieved some practical success. However, most existing unsupervised TST methods suffer from two limitations: (i) they require massive amounts of non-parallel data to guide the transfer between different text styles, and (ii) they degrade severely when the model is fine-tuned on new domains. In this work, we propose DAML-ATM (Domain Adaptive Meta-Learning with Adversarial Transfer Model), which consists of two parts: DAML and ATM. DAML is a domain adaptive meta-learning approach that learns general knowledge from multiple heterogeneous source domains and can adapt to new, unseen domains with a small amount of data. In addition, we propose a new unsupervised TST approach, the Adversarial Transfer Model (ATM), which combines a sequence-to-sequence pre-trained language model with adversarial style training for better content preservation and style transfer. Results on multi-domain datasets demonstrate that our approach generalizes well to unseen low-resource domains, achieving state-of-the-art results against ten strong baselines.
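The abstract does not spell out the meta-learning procedure, so the following is only a minimal sketch of how a first-order MAML-style update over heterogeneous source domains could be organized, assuming a PyTorch model; `loss_fn`, `domain.sample()`, and the learning rates are hypothetical placeholders, and the paper's actual inner/outer objectives (e.g. combining reconstruction and adversarial style losses) may differ.

```python
import copy
import torch


def daml_meta_step(model, loss_fn, source_domains, inner_lr, meta_optimizer):
    """One first-order meta-update over a batch of source domains.

    Hypothetical sketch: `loss_fn(model, batch)` would return the TST
    training loss (e.g. reconstruction plus an adversarial style term),
    and each `domain.sample()` yields (support, query) mini-batches.
    """
    meta_optimizer.zero_grad()
    for domain in source_domains:
        support_batch, query_batch = domain.sample()

        # Inner loop: adapt a throwaway copy of the model on the
        # domain's support set with one gradient step.
        adapted = copy.deepcopy(model)
        inner_loss = loss_fn(adapted, support_batch)
        inner_grads = torch.autograd.grad(inner_loss, list(adapted.parameters()))
        with torch.no_grad():
            for p, g in zip(adapted.parameters(), inner_grads):
                p.sub_(inner_lr * g)

        # Outer loop: evaluate the adapted parameters on the query set
        # and accumulate the query gradient onto the original model
        # (first-order approximation, as in FOMAML).
        query_loss = loss_fn(adapted, query_batch)
        outer_grads = torch.autograd.grad(query_loss, list(adapted.parameters()))
        with torch.no_grad():
            for p, g in zip(model.parameters(), outer_grads):
                p.grad = g.clone() if p.grad is None else p.grad + g
    meta_optimizer.step()
```

Under this scheme, adaptation to a new low-resource domain would reuse only the inner loop: a few gradient steps on the small target-domain dataset starting from the meta-learned initialization.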