This work addresses the problem of motion compensation and pattern tracking in event camera data. An event camera generates asynchronous streams of events triggered independently by each pixel upon changes in the observed intensity. While offering great advantages in low-light and rapid-motion scenarios, such unconventional data present significant research challenges, as traditional vision algorithms are not directly applicable to this sensing modality. The proposed method decomposes the tracking problem into a local SE(2) motion-compensation step followed by a homography registration of small motion-compensated event batches. The first component relies on Gaussian Process (GP) theory to model the continuous occupancy field of the events in the image plane and embeds the camera trajectory in the covariance kernel function. Accordingly, the trajectory is estimated analogously to GP hyperparameter learning, by maximising the log marginal likelihood of the data. The continuous occupancy fields are then turned into distance fields and used as templates for homography-based registration. Benchmarking the proposed method against other state-of-the-art techniques shows that our open-source implementation performs high-accuracy motion compensation and produces high-quality tracks in real-world scenarios.