This paper proposes a unified framework dubbed Multi-view and Temporal Fusing Transformer (MTF-Transformer) to adaptively handle varying numbers of views and video lengths without camera calibration in 3D Human Pose Estimation (HPE). It consists of a Feature Extractor, a Multi-view Fusing Transformer (MFT), and a Temporal Fusing Transformer (TFT). The Feature Extractor estimates 2D pose from each image and fuses the predictions according to their confidence. It provides pose-focused feature embeddings and makes the subsequent modules computationally lightweight. MFT fuses the features of a varying number of views with a novel Relative-Attention block. It adaptively measures the implicit relative relationship between each pair of views and reconstructs more informative features. TFT aggregates the features of the whole sequence and predicts 3D pose via a transformer. It adaptively deals with videos of arbitrary length and fully utilizes the temporal information. The adoption of transformers enables our model to learn spatial geometry better and preserve robustness across varying application scenarios. We report quantitative and qualitative results on the Human3.6M, TotalCapture, and KTH Multiview Football II datasets. Compared with state-of-the-art methods that require camera parameters, MTF-Transformer obtains competitive results and generalizes well to dynamic capture with an arbitrary number of unseen views.
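The key property of MFT described above is that attention-based fusion is indifferent to how many views are present: each view attends to every other view, so the same module handles two cameras or ten. The function below is a minimal, hedged sketch of that idea using plain (unlearned) self-attention over per-view features; the shapes and the name `fuse_views` are illustrative assumptions, not the paper's actual Relative-Attention implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_views(feats):
    """Attention-style fusion over a variable number of views (illustrative sketch).

    feats: (V, D) array, one D-dimensional feature per view; V may vary per call.
    Each view attends to all views, so the output keeps shape (V, D) no matter
    how many views are given -- no camera calibration is used.
    """
    q = k = v = feats  # self-attention; learned projections omitted in this sketch
    scores = q @ k.T / np.sqrt(feats.shape[1])  # pairwise view-to-view affinities
    return softmax(scores, axis=-1) @ v         # each view's feature rebuilt from all views

# The same function handles 2 views or 4 views without modification:
fused_2 = fuse_views(np.random.randn(2, 16))
fused_4 = fuse_views(np.random.randn(4, 16))
```

This view-count invariance is what lets a single trained model generalize to capture setups with unseen numbers of cameras.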