Text autoencoders are commonly used for conditional generation tasks such as style transfer. We propose plug-and-play methods in which any pretrained autoencoder can be used; they only require learning a mapping within the autoencoder's embedding space, training embedding-to-embedding (Emb2Emb). This reduces the need for labeled training data for the task and makes the training procedure more efficient. Crucial to the success of this method are a loss term that keeps the mapped embedding on the manifold of the autoencoder and a mapping trained to navigate the manifold by learning offset vectors. Evaluations on style transfer tasks both with and without sequence-to-sequence supervision show that our method performs better than or comparably to strong baselines while being up to four times faster.
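To make the offset-vector idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation): a frozen embedding space stands in for a pretrained text autoencoder, and a mapping is trained on embedding pairs to predict an additive offset rather than the target embedding directly. The manifold loss from the paper is omitted and only noted in a comment; all names and the linear parameterization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding dimensionality (the real autoencoder's is much larger)

# Offset-vector mapping: predict a delta and add it to the input embedding,
# i.e. mapping(z) = z + offset(z), instead of regressing the target directly.
W = np.zeros((DIM, DIM))
b = np.zeros(DIM)

def mapping(z):
    return z + (z @ W + b)  # input embedding plus learned offset

# Synthetic paired embeddings (target = source + a fixed shift), mimicking
# supervised Emb2Emb training on pairs of encoded sentences. In the real
# method, an additional loss term would keep mapping(z) on the autoencoder's
# manifold; that term is not implemented in this sketch.
shift = rng.normal(size=DIM)
Z_src = rng.normal(size=(64, DIM))
Z_tgt = Z_src + shift

lr = 0.1
for step in range(500):
    err = mapping(Z_src) - Z_tgt           # gradient of the MSE task loss
    W -= lr * (Z_src.T @ err) / len(Z_src)
    b -= lr * err.mean(axis=0)

final_mse = float(np.mean((mapping(Z_src) - Z_tgt) ** 2))
print(final_mse)
```

Because the mapping only has to model the residual between source and target embeddings, it can stay close to the identity, which is one intuition behind learning offsets rather than absolute positions in embedding space.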