3D facial modelling and animation in computer vision and graphics traditionally require either a digital artist's skill or complex pipelines with objective-function-based solvers that fit models to motion capture. This inaccessibility of quality modelling to non-experts impedes effective quantitative study of facial stimuli in experimental psychology. The EmoGen methodology presented in this paper solves this issue by democratising facial modelling technology. EmoGen is a robust and configurable framework that lets anyone author arbitrary, quantifiable facial expressions in 3D through a user-guided genetic algorithm search. Beyond sample generation, the methodology includes techniques for analysing distributions of these expressions in a principled way. This paper covers the technical aspects of expression generation: our production-quality facial blendshape model, automatic correction of implausible facial configurations in the absence of artist supervision, and the genetic algorithm implementation employed in the model space search. Further, we provide a comparative evaluation, both theoretical and empirical, of ways to quantify generated facial expressions in the blendshape and geometric domains. The purpose of this analysis is (1) to define a similarity cost function for simulating model space search, enabling convergence and parameter-dependence assessment of the genetic algorithm, and (2) to inform best practices for data distribution analysis in experimental psychology.
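To make the user-guided search concrete, the following is a minimal sketch of one generation of a genetic algorithm over blendshape weight vectors. All names, operators, and parameters here are illustrative assumptions, not EmoGen's actual implementation: individuals are vectors of blendshape activations in [0, 1], the user's on-screen selections play the role of fitness, and a simple clamp stands in for the paper's corrective mechanism for implausible configurations.

```python
import random

N_BLENDSHAPES = 8   # toy rig; production blendshape models use many more
POP_SIZE = 6

def random_individual():
    # An individual is a vector of blendshape activation weights in [0, 1].
    return [random.random() for _ in range(N_BLENDSHAPES)]

def clamp(weights):
    # Stand-in for automatic correction of implausible configurations:
    # here we simply keep every activation inside its valid range.
    return [min(1.0, max(0.0, w)) for w in weights]

def crossover(a, b):
    # Uniform crossover: each weight is inherited from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(weights, rate=0.2, scale=0.1):
    # Gaussian perturbation of a random subset of weights.
    return clamp([w + random.gauss(0.0, scale) if random.random() < rate else w
                  for w in weights])

def evolve(population, selected_indices):
    # `selected_indices` stands in for the user's choices of the rendered
    # faces that best match the target expression (the guided fitness signal).
    parents = [population[i] for i in selected_indices]
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        children.append(mutate(crossover(a, b)))
    return children

random.seed(0)
pop = [random_individual() for _ in range(POP_SIZE)]
pop = evolve(pop, selected_indices=[0, 2])  # user picked individuals 0 and 2
```

The design choice to drive selection with human judgement rather than an objective function is what removes the need for motion-capture fitting: the user only ever ranks candidate faces, never edits weights directly.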