We present a novel approach for tracking multiple people in video. Unlike past approaches, which employ 2D representations, we focus on using 3D representations of people, located in three-dimensional space. To this end, we develop a method, Human Mesh and Appearance Recovery (HMAR), which, in addition to extracting the 3D geometry of the person as a SMPL mesh, also extracts appearance as a texture map on the triangles of the mesh. This serves as a 3D representation for appearance that is robust to viewpoint and pose changes. Given a video clip, we first detect bounding boxes corresponding to people, and for each one, we extract 3D appearance, pose, and location information using HMAR. These embedding vectors are then sent to a transformer, which performs spatio-temporal aggregation of the representations over the duration of the sequence. The similarity of the resulting representations is used to solve for associations that assign each person to a tracklet. We evaluate our approach on the PoseTrack, MuPoTs, and AVA datasets. We find that 3D representations are more effective than 2D representations for tracking in these settings, and we obtain state-of-the-art performance. Code and results are available at: https://brjathu.github.io/T3DP.
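The abstract describes a three-stage pipeline: per-detection HMAR embeddings, transformer-based spatio-temporal aggregation, and similarity-based association into tracklets. The sketch below is a rough illustration of the final association step only, assuming the aggregated per-frame embedding arrays are already available; the function names, the per-frame-pair Hungarian matching, and the similarity threshold are all assumptions made for illustration, not the authors' released implementation.

```python
"""Minimal, hypothetical sketch of similarity-based tracklet association.
Assumes embeddings (appearance/pose/location, already aggregated by the
transformer) are given; this is NOT the released T3DP code."""
import numpy as np
from scipy.optimize import linear_sum_assignment


def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a (M, D) and b (N, D).
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T


def associate(per_frame_embs, sim_thresh=0.5):
    """Link detections across frames into tracklets.

    per_frame_embs: list over frames; each entry is an (N_t, D) array of
    embedding vectors for that frame's detections. Returns, per frame,
    a list of tracklet ids (one per detection).
    """
    next_id = 0
    prev_embs, prev_ids = None, []
    all_ids = []
    for embs in per_frame_embs:
        ids = [-1] * len(embs)
        if prev_embs is not None and len(embs) and len(prev_embs):
            sim = cosine_sim(prev_embs, embs)
            # Hungarian matching maximizes total similarity between
            # consecutive frames (a simplification of the paper's
            # clip-level association).
            rows, cols = linear_sum_assignment(-sim)
            for r, c in zip(rows, cols):
                if sim[r, c] > sim_thresh:
                    ids[c] = prev_ids[r]
        for i in range(len(ids)):
            if ids[i] == -1:  # unmatched detection starts a new tracklet
                ids[i] = next_id
                next_id += 1
        all_ids.append(ids)
        prev_embs, prev_ids = embs, ids
    return all_ids
```

Matching per consecutive frame pair keeps the sketch short; the aggregation over the whole sequence in the paper allows associations to draw on longer temporal context than this simplification does.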