We present VoxelTrack for multi-person 3D pose estimation and tracking from a few cameras separated by wide baselines. It employs a multi-branch network to jointly estimate 3D poses and re-identification (Re-ID) features for all people in the environment. In contrast to previous efforts, which must establish cross-view correspondence based on noisy 2D pose estimates, it directly estimates and tracks 3D poses from a 3D voxel-based representation constructed from multi-view images. We first discretize the 3D space into regular voxels and compute a feature vector for each voxel by averaging the body joint heatmaps inversely projected from all views. We estimate 3D poses from the voxel representation by predicting whether each voxel contains a particular body joint. Similarly, a Re-ID feature is computed for each voxel and used to track the estimated 3D poses over time. The main advantage of the approach is that it avoids making any hard decisions based on individual images, so it can robustly estimate and track 3D poses even when people are severely occluded in some cameras. It outperforms state-of-the-art methods by a large margin on three public datasets: Shelf, Campus, and CMU Panoptic.
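The core construction step — averaging inversely projected 2D joint heatmaps over a regular voxel grid — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, nearest-neighbour sampling, and the uniform grid bounds are assumptions for clarity; the actual network would operate on learned heatmaps and use differentiable sampling.

```python
import numpy as np

def build_voxel_features(heatmaps, proj_mats, grid_min, grid_max, resolution):
    """Hypothetical sketch: average per-joint 2D heatmaps back-projected
    from all camera views into a regular 3D voxel grid.

    heatmaps:  list of (J, H, W) arrays, one per view
    proj_mats: list of (3, 4) camera projection matrices
    returns:   (J, R, R, R) voxel feature volume
    """
    # Regular grid of voxel centers spanning the capture space.
    axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)   # (R, R, R, 3)
    pts = grid.reshape(-1, 3)
    pts_h = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)  # homogeneous

    J, H, W = heatmaps[0].shape
    volume = np.zeros((J, pts.shape[0]))
    count = np.zeros(pts.shape[0])

    for hm, P in zip(heatmaps, proj_mats):
        uvw = pts_h @ P.T                  # project voxel centers into the image
        u = uvw[:, 0] / uvw[:, 2]
        v = uvw[:, 1] / uvw[:, 2]
        # Keep voxels that project in front of the camera and inside the image.
        valid = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        ui, vi = u[valid].astype(int), v[valid].astype(int)
        volume[:, valid] += hm[:, vi, ui]  # nearest-neighbour heatmap sampling
        count[valid] += 1

    # Average over the views that actually observe each voxel.
    return (volume / np.maximum(count, 1)).reshape(J, *([resolution] * 3))
```

A pose branch would then classify, per joint, whether each voxel of this volume contains that joint; a parallel branch computes a Re-ID feature per voxel in the same way for tracking.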

