Conventional text style transfer approaches for natural language focus on sentence-level style transfer without considering contextual information, and the style is described with attributes (e.g., formality). When applying style transfer to conversations such as task-oriented dialogues, existing approaches suffer from these limitations, as context can play an important role and the style attributes are often difficult to define in conversations. In this paper, we introduce conversation style transfer as a few-shot learning problem, where the model learns to perform style transfer by observing only a few target-style dialogue examples. We propose a novel in-context learning approach to solve the task with style-free dialogues as a pivot. Human evaluation shows that by incorporating multi-turn context, the model is able to match the target style while having better appropriateness and semantic correctness compared to utterance-level style transfer. Additionally, we show that conversation style transfer can also benefit downstream tasks. Results on multi-domain intent classification tasks show improvement in F1 scores after transferring the style of training data to match the style of test data.
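The two-stage idea described above (source-style dialogue to a style-free pivot, then pivot to the target style via in-context examples) can be sketched as prompt construction. This is a minimal illustrative sketch, not the paper's actual implementation: all function names, prompt wording, and data formats are assumptions, and the prompts would be sent to a language model in practice.

```python
# Hypothetical sketch of few-shot conversation style transfer via a
# style-free pivot. Names and prompt templates are illustrative only.

def build_pivot_prompt(demos, context, utterance):
    """Stage 1: few-shot prompt that rewrites the last utterance into a
    style-free (neutral) form, conditioned on multi-turn context.
    `demos` is a list of (styled, style-free) demonstration pairs."""
    parts = ["Rewrite the last utterance in a neutral, style-free way."]
    for styled, neutral in demos:
        parts.append(f"Styled: {styled}\nNeutral: {neutral}")
    parts.append("Context: " + " | ".join(context))
    parts.append(f"Styled: {utterance}\nNeutral:")
    return "\n\n".join(parts)

def build_target_prompt(demos, context, neutral_utterance):
    """Stage 2: few-shot prompt that rewrites the style-free pivot into
    the target style, using only target-style dialogue examples.
    `demos` is a list of (style-free, target-style) pairs."""
    parts = ["Rewrite the last utterance to match the style of the examples."]
    for neutral, styled in demos:
        parts.append(f"Neutral: {neutral}\nTarget: {styled}")
    parts.append("Context: " + " | ".join(context))
    parts.append(f"Neutral: {neutral_utterance}\nTarget:")
    return "\n\n".join(parts)

# Toy usage: the model would first generate the neutral pivot from the
# stage-1 prompt, then the target-style utterance from the stage-2 prompt.
stage1 = build_pivot_prompt(
    [("Yo, what's the damage for two nights?", "What is the price for two nights?")],
    ["User: I need a hotel.", "Agent: Sure, when are you arriving?"],
    "Gimme the cheapest room you got.",
)
stage2 = build_target_prompt(
    [("What is the price for two nights?", "May I ask what the rate would be for a two-night stay?")],
    ["User: I need a hotel.", "Agent: Sure, when are you arriving?"],
    "Please give me the least expensive room.",
)
```

Keeping the multi-turn context in both prompts is what allows the model to resolve references and produce an appropriate rewrite, which is the advantage over utterance-level transfer noted in the evaluation.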