Event cameras are emerging bio-inspired vision sensors that report per-pixel brightness changes asynchronously. They offer the notable advantages of high dynamic range, high-speed response, and a low power budget, which make them well suited to capturing local motions in uncontrolled environments. This motivates us to unlock the potential of event cameras for human pose estimation, a task that has rarely been explored with this sensor. However, owing to the paradigm shift from conventional frame-based cameras, the event signals within a time interval contain very limited information: event cameras only capture moving body parts and ignore static ones, so some parts appear incomplete or even vanish within the interval. This paper proposes a novel densely connected recurrent architecture to address this problem of incomplete information. With this architecture, we explicitly model both sequential and non-sequential geometric consistency across time steps, accumulating information from previous frames to recover the entire human body and achieving stable and accurate human pose estimation from event data. Moreover, to better evaluate our model, we collect a large-scale multimodal event-based dataset with human pose annotations, which is, to the best of our knowledge, the most challenging dataset of its kind to date. Experimental results on two public datasets and our own dataset demonstrate the effectiveness and strength of our approach. The code will be made available online to facilitate future research.
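To make the idea of dense recurrent connections concrete, the sketch below shows one way such a module could look in PyTorch. It is only an illustrative assumption, not the paper's actual implementation: the class name `DenselyConnectedRecurrentStack`, the use of `nn.GRUCell`, and all dimensions are hypothetical choices. Each recurrent layer receives the current event feature concatenated with the hidden states of all earlier layers, so information from previous time steps can be accumulated to compensate for body parts missing from the current event slice.

```python
import torch
import torch.nn as nn


class DenselyConnectedRecurrentStack(nn.Module):
    """Illustrative sketch (assumed design): a stack of recurrent cells where
    each cell sees the current event feature plus all earlier cells' hidden
    states, i.e. dense connections across the recurrent layers."""

    def __init__(self, feat_dim=256, hidden_dim=256, num_layers=3):
        super().__init__()
        self.cells = nn.ModuleList()
        for i in range(num_layers):
            # Dense input: event feature + hidden states of all earlier layers.
            in_dim = feat_dim + i * hidden_dim
            self.cells.append(nn.GRUCell(in_dim, hidden_dim))

    def forward(self, event_feats, hidden=None):
        # event_feats: (T, B, feat_dim) per-interval event features.
        T, B, _ = event_feats.shape
        if hidden is None:
            hidden = [event_feats.new_zeros(B, cell.hidden_size)
                      for cell in self.cells]
        outputs = []
        for t in range(T):
            dense_in = event_feats[t]
            new_hidden = []
            for i, cell in enumerate(self.cells):
                h = cell(dense_in, hidden[i])
                new_hidden.append(h)
                # Dense connection: feed this hidden state to deeper layers too.
                dense_in = torch.cat([dense_in, h], dim=-1)
            hidden = new_hidden
            outputs.append(hidden[-1])
        # (T, B, hidden_dim) features to be decoded into per-frame poses.
        return torch.stack(outputs), hidden
```

A pose head (e.g. a small MLP regressing joint coordinates or heatmaps) would consume the per-step outputs; carrying `hidden` across intervals is what lets the model retain evidence about body parts that stopped moving.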