3D hand shape and pose estimation from a single depth map is a new and challenging computer vision problem with many applications. Existing methods addressing it directly regress hand meshes via 2D convolutional neural networks, which leads to artefacts due to perspective distortions in the images. To address the limitations of the existing methods, we develop HandVoxNet++, i.e., a voxel-based deep network with 3D and graph convolutions trained in a fully supervised manner. The input to our network is a 3D voxelized depth map based on the truncated signed distance function (TSDF). HandVoxNet++ relies on two hand shape representations. The first one is the 3D voxelized grid of hand shape, which does not preserve the mesh topology and is the most accurate representation. The second representation is the hand surface, which preserves the mesh topology. We combine the advantages of both representations by aligning the hand surface to the voxelized hand shape either with a new neural Graph-Convolutions-based Mesh Registration (GCN-MeshReg) or a classical segment-wise Non-Rigid Gravitational Approach (NRGA++), which does not rely on training data. In extensive evaluations on three public benchmarks, i.e., SynHand5M, the depth-based HANDS19 challenge and HO-3D, the proposed HandVoxNet++ achieves state-of-the-art performance. In this journal extension of our previous approach presented at CVPR 2020, we gain 41.09% and 13.7% higher shape alignment accuracy on the SynHand5M and HANDS19 datasets, respectively. Our method is ranked first on the HANDS19 challenge dataset (Task 1: Depth-Based 3D Hand Pose Estimation) at the time of the submission of our results to the portal in August 2020.
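The TSDF-based voxelization of the input depth map can be illustrated with a minimal sketch. The snippet below is an assumption-laden approximation, not the paper's exact pipeline: grid resolution, volume placement, camera intrinsics, and the truncation distance are all illustrative placeholders, and a projective (per-ray) signed distance is used as a common simplification of the full TSDF.

```python
import numpy as np

def depth_to_tsdf(depth, fx, fy, cx, cy, grid_size=32, trunc=0.05,
                  vol_origin=(-0.2, -0.2, 0.3), vol_extent=0.4):
    """Voxelize a depth map into a projective TSDF grid (illustrative only).

    All parameters (grid_size, trunc, volume placement) are hypothetical
    defaults; the paper's actual voxelization may differ.
    """
    H, W = depth.shape
    # Voxel centre coordinates in camera space (metres)
    idx = (np.arange(grid_size) + 0.5) * (vol_extent / grid_size)
    X, Y, Z = np.meshgrid(vol_origin[0] + idx,
                          vol_origin[1] + idx,
                          vol_origin[2] + idx, indexing="ij")
    # Project each voxel centre into the depth image (pinhole model)
    u = np.round(fx * X / Z + cx).astype(int)
    v = np.round(fy * Y / Z + cy).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(valid, depth[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)], 0.0)
    # Signed distance along the camera ray, truncated to [-1, 1]
    tsdf = np.clip((d - Z) / trunc, -1.0, 1.0)
    tsdf[~valid | (d <= 0)] = 1.0  # unobserved voxels treated as free space
    return tsdf
```

A flat depth plane at 0.5 m, for example, yields +1 in front of the surface, -1 behind it, and values in between within the truncation band, which is the volumetric encoding the 3D convolutions then consume.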