We present an unsupervised approach that converts the input speech of any individual into audiovisual streams of potentially infinitely many output speakers. Our approach builds on simple autoencoders that project out-of-sample data onto the distribution of the training set. We use Exemplar Autoencoders to learn the voice, stylistic prosody, and visual appearance of a specific target exemplar speech. In contrast to existing methods, the proposed approach can be easily extended to an arbitrarily large number of speakers and styles using only 3 minutes of target audio-video data, without requiring {\em any} training data for the input speaker. To do so, we learn audiovisual bottleneck representations that capture the structured linguistic content of speech. We outperform prior approaches on both audio and video synthesis, and provide extensive qualitative analysis on our project page -- https://dunbar12138.github.io/projectpage/Audiovisual/.
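To make the projection idea concrete, the following is a minimal PyTorch sketch of an exemplar autoencoder trained on a single target speaker's speech frames; it is an illustration under assumptions, not the authors' architecture. The class and function names, layer sizes, feature dimension (e.g., 80-bin spectrogram frames), and training hyperparameters are all hypothetical.

```python
import torch
import torch.nn as nn

class ExemplarAutoencoder(nn.Module):
    """Sketch: an autoencoder fit only to one target exemplar's speech features.
    A narrow bottleneck is intended to retain linguistic content while the decoder
    reproduces the target speaker's voice. All dimensions here are assumptions."""
    def __init__(self, feat_dim=80, bottleneck_dim=64):
        super().__init__()
        # Encoder compresses input frames into a low-dimensional bottleneck code.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, bottleneck_dim),
        )
        # Decoder reconstructs target-speaker frames from the bottleneck code.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_on_target(model, target_frames, epochs=100, lr=1e-3):
    """Fit the autoencoder to a few minutes of target-speaker frames
    using a simple reconstruction loss (hypothetical training loop)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(target_frames), target_frames)
        loss.backward()
        opt.step()
    return model

# At test time, frames from *any* input speaker pushed through the trained model
# are projected onto the target speaker's training distribution:
#   converted = model(input_speaker_frames)
```

The same principle extends to the audiovisual setting described above by decoding both audio and video from the shared bottleneck; that multimodal decoder is omitted here for brevity.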