We introduce VideoPose, a Transformer-based 6D object pose estimation framework: an end-to-end attention-based architecture that attends to previous frames to estimate accurate 6D object poses in videos. Our approach leverages the temporal information in a video sequence for pose refinement while remaining computationally efficient and robust. Compared to existing methods, our architecture captures and reasons over long-range dependencies efficiently, iteratively refining pose estimates over the video sequence. Experimental evaluation on the YCB-Video dataset shows that our approach is on par with state-of-the-art Transformer-based methods and performs significantly better than CNN-based approaches. Further, running at 33 fps, it is also more efficient and therefore applicable to a variety of applications that require real-time object pose estimation. Training code and pretrained models are available at https://github.com/ApoorvaBeedu/VideoPose
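The core idea of attending over previous frames to refine the current pose estimate can be illustrated with a minimal, self-contained sketch of single-head scaled dot-product attention in pure Python. This is an assumption-laden toy (the function names, feature shapes, and single-head simplification are ours), not the actual VideoPose implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend_over_frames(query, frame_feats):
    """Toy temporal attention: the current frame's pose query attends
    over feature vectors from previous frames (scaled dot-product),
    returning a refined feature as a weighted sum of those features.
    Hypothetical sketch -- not the paper's architecture."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, f)) / math.sqrt(d)
              for f in frame_feats]
    weights = softmax(scores)
    refined = [sum(w * f[i] for w, f in zip(weights, frame_feats))
               for i in range(d)]
    return refined, weights
```

In an actual video pose model, the query and frame features would be learned embeddings and the attention multi-headed; the point here is only that the refined feature is a convex combination of past-frame features, which is how long-range temporal context can inform the current pose estimate.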