Statistical 3D shape models of the head, hands, and full body are widely used in computer vision and graphics. Despite their wide use, we show that existing models of the head and hands fail to capture the full range of motion for these parts. Moreover, existing work largely ignores the feet, which are crucial for modeling human movement and have applications in biomechanics, animation, and the footwear industry. The problem is that previous body part models are trained using 3D scans that are isolated to the individual parts. Such data does not capture the full range of motion for such parts, e.g. the motion of the head relative to the neck. Our observation is that full-body scans provide important information about the motion of the body parts. Consequently, we propose a new learning scheme that jointly trains a full-body model and specific part models using a federated dataset of full-body and body-part scans. Specifically, we train an expressive human body model called SUPR (Sparse Unified Part-Based Human Representation), where each joint strictly influences a sparse set of model vertices. The factorized representation enables separating SUPR into an entire suite of body part models. Note that the feet have received little attention and existing 3D body models have highly under-actuated feet. Using novel 4D scans of feet, we train a model with an extended kinematic tree that captures the range of motion of the toes. Additionally, feet deform due to ground contact. To model this, we include a novel non-linear deformation function that predicts foot deformation conditioned on the foot pose, shape, and ground contact. We train SUPR on an unprecedented number of scans: 1.2 million body, head, hand, and foot scans. We quantitatively compare SUPR and the separated body parts and find that our suite of models generalizes better than existing models. SUPR is available at http://supr.is.tue.mpg.de
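The factorized, part-based representation described above rests on sparse linear blend skinning: each joint influences only a small subset of vertices, so the skinning-weight matrix has sparse rows and the model can be split cleanly into part models. The following is a minimal sketch of this idea, not the actual SUPR implementation; all names and sizes are illustrative.

```python
import numpy as np

def sparse_lbs(vertices, weights, joint_transforms):
    """Pose template vertices with per-joint rigid transforms.

    vertices:         (V, 3) template mesh in the rest pose
    weights:          (V, J) skinning weights; each row sums to 1 and is
                      sparse, so separating out a body part only requires
                      keeping the joints (columns) that influence its vertices
    joint_transforms: (J, 4, 4) world transforms along the kinematic tree
    """
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])                   # (V, 4)
    # Blend each joint's transform by its weight, then apply per vertex.
    blended = np.einsum('vj,jab->vab', weights, joint_transforms)   # (V, 4, 4)
    posed = np.einsum('vab,vb->va', blended, homo)[:, :3]
    return posed

# Tiny example: two vertices, two joints; joint 1 translates by +1 in x.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
W = np.array([[1.0, 0.0], [0.0, 1.0]])   # strictly sparse weight rows
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0                         # joint 1 moves its vertices by x += 1
print(sparse_lbs(verts, W, T))           # vertex 0 is unaffected by joint 1
```

Because each vertex row of `W` touches few joints, zeroing out the joints of other parts leaves the part's vertices unchanged, which is what makes the separation into head, hand, and foot models possible.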