Collecting high-quality conversational data can be very expensive for most applications and infeasible for others due to privacy, ethical, or similar concerns. A promising direction for tackling this problem is to generate synthetic dialogues by prompting large language models. In this work, we use a small set of expert-written conversations as in-context examples to synthesize a social conversation dataset via prompting. We perform several thorough evaluations of our synthetic conversations against human-collected conversations. These include human evaluation of the synthesized conversations along several dimensions of conversation quality, as well as interactive human evaluation of chatbots fine-tuned on the synthetically generated dataset. We additionally demonstrate that this prompting approach generalizes to multi-party conversations, offering the potential to create new synthetic data for multi-party tasks. Our synthetic multi-party conversations were rated more favorably across all measured dimensions than conversation excerpts sampled from a human-collected multi-party dataset.
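To make the core approach concrete, below is a minimal sketch of few-shot dialogue synthesis via prompting, the technique the abstract describes. The choice of an OpenAI-style chat API, the model name, the prompt wording, and the seed dialogues are all illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of few-shot dialogue synthesis via prompting.
# Assumptions (not from the paper): an OpenAI-style chat API, a
# hypothetical set of expert-written seed conversations, and this
# particular prompt wording.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A small set of expert-written conversations used as in-context examples.
seed_conversations = [
    "A: Hi! How was your weekend?\nB: Great, I went hiking. You?\nA: Mostly relaxed at home.",
    "A: Did you catch the game last night?\nB: I did! That final play was unbelievable.",
]

def synthesize_conversation(seeds, model="gpt-4o-mini"):
    """Prompt an LLM with seed dialogues and ask for a new one in the same style."""
    examples = "\n\n".join(f"Example conversation:\n{s}" for s in seeds)
    prompt = (
        "The following are examples of casual social conversations "
        "between two people.\n\n"
        f"{examples}\n\n"
        "Write a new, original conversation in the same style:"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.9,  # higher temperature encourages diverse synthetic dialogues
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(synthesize_conversation(seed_conversations))
```

Running this loop many times over varied seed subsets would yield a synthetic dataset of the kind evaluated in the paper; extending it to multi-party dialogue would only require seed conversations with more than two speakers.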