Event cameras are emerging bio-inspired vision sensors that report per-pixel brightness changes asynchronously. They offer noticeable advantages in high dynamic range, high-speed response, and low power budget, which make them well suited to capturing local motion in uncontrolled environments. This motivates us to unlock the potential of event cameras for human pose estimation, a task that remains rarely explored with this modality. Due to the paradigm shift from conventional frame-based cameras, however, the event signals within a time interval carry very limited information: event cameras capture only the moving body parts and ignore the static ones, so some parts appear incomplete or even vanish entirely within the interval. This paper proposes a novel densely connected recurrent architecture to address this problem of incomplete information. With this recurrent architecture, we explicitly model not only sequential but also non-sequential geometric consistency across time steps, accumulating information from previous frames to recover the entire human body and achieving stable and accurate human pose estimation from event data. Moreover, to better evaluate our model, we collect a large-scale multimodal event-based dataset with human pose annotations, which is, to the best of our knowledge, the most challenging such dataset to date. Experimental results on two public datasets and our own dataset demonstrate the effectiveness and strength of our approach. Code will be made available online to facilitate future research.
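To make the sparsity problem concrete, the sketch below rasterizes asynchronous events from one time interval into a frame and accumulates frames across intervals with a simple exponential recurrence. This is only an illustration of the underlying idea (static body parts emit no events, so earlier intervals must supply their evidence); the representation, function names, and the fixed decay factor are assumptions for this sketch, not the paper's learned recurrent architecture.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Rasterize (x, y, polarity) events from one time interval into a
    2-channel count image (channel 0: positive, channel 1: negative).
    Hypothetical encoding; the paper's exact event representation may differ."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, polarity in events:
        channel = 0 if polarity > 0 else 1
        frame[channel, y, x] += 1.0
    return frame

def accumulate(frames, decay=0.8):
    """Exponentially accumulate event frames over time steps.
    Pixels that stop firing (static body parts) retain decayed evidence
    from earlier intervals instead of vanishing outright; the paper's
    recurrent network learns this accumulation rather than fixing it."""
    state = np.zeros_like(frames[0])
    for frame in frames:
        state = decay * state + frame
    return state
```

A moving arm fires events in every interval and stays bright in `state`, while a torso that moved only in the first interval fades gradually instead of disappearing immediately, which is the kind of temporal evidence a recurrent pose estimator can exploit.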