We introduce a video framework for modeling the association between verbal and non-verbal communication during dyadic conversation. Given the input speech of a speaker, our approach retrieves a video of a listener whose facial expressions would be socially appropriate given the context. Our approach further allows the listener to be conditioned on their own goals, personalities, or backgrounds. We model conversations through a composition of large language models and vision-language models, creating internal representations that are interpretable and controllable. To study multimodal communication, we propose a new video dataset of unscripted conversations covering diverse topics and demographics. Experiments and visualizations show that our approach produces listeners that are significantly more socially appropriate than baselines. However, many challenges remain, and we release our dataset publicly to spur further progress. See our website for video results, data, and code: https://realtalk.cs.columbia.edu.