We present READ Avatars, a 3D-based approach for generating 2D avatars that are driven by audio input with direct and granular control over emotion. Previous methods are unable to achieve realistic animation due to the many-to-many nature of audio-to-expression mappings. We alleviate this issue by introducing an adversarial loss in the audio-to-expression generation process. This removes the smoothing effect of regression-based models and helps to improve the realism and expressiveness of the generated avatars. We further note that audio should be directly utilized when generating mouth interiors, which other 3D-based methods do not attempt. We address this with audio-conditioned neural textures, which are resolution-independent. To evaluate the performance of our method, we perform quantitative and qualitative experiments, including a user study. We also propose a new metric for comparing how well an actor's emotion is reconstructed in the generated avatar. Our results show that our approach outperforms state-of-the-art audio-driven avatar generation methods across several metrics. A demo video can be found at \url{https://youtu.be/QSyMl3vV0pA}