Garments are important to humans. A visual system that can estimate and track the complete garment pose would be useful for many downstream tasks and real-world applications. In this work, we present a complete package to address the category-level garment pose tracking task: (1) a recording system, VR-Garment, with which users can manipulate virtual garment models in simulation through a VR interface; (2) a large-scale dataset, VR-Folding, with complex garment pose configurations under manipulation tasks such as flattening and folding; and (3) an end-to-end online tracking framework, GarmentTracking, which predicts the complete garment pose in both canonical space and task space given a point cloud sequence. Extensive experiments demonstrate that the proposed GarmentTracking achieves strong performance even when the garment undergoes large non-rigid deformation, outperforming the baseline approach in both speed and accuracy. We hope our proposed solution can serve as a platform for future research. Code and datasets are available at https://garment-tracking.robotflow.ai.