Identifying independently moving objects is an essential task for dynamic scene understanding. However, traditional cameras used in dynamic scenes may suffer from motion blur or exposure artifacts due to their sampling principle. By contrast, event-based cameras are novel bio-inspired sensors that offer advantages to overcome such limitations. They report pixelwise intensity changes asynchronously, which enables them to acquire visual information at exactly the same rate as the scene dynamics. We develop a method to identify independently moving objects in data acquired with an event-based camera, i.e., to solve the event-based motion segmentation problem. We cast the problem as one of energy minimization involving the fitting of multiple motion models. We jointly solve two subproblems, namely event cluster assignment (labeling) and motion model fitting, in an iterative manner by exploiting the structure of the input event data in the form of a spatio-temporal graph. Experiments on available datasets demonstrate the versatility of the method in scenes with different motion patterns and numbers of moving objects. The evaluation shows state-of-the-art results without having to predetermine the number of expected moving objects. We release the software and dataset under an open source license to foster research in the emerging topic of event-based motion segmentation.
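The joint alternation between event labeling and motion model fitting can be illustrated with a toy sketch. The snippet below is not the paper's graph-cut energy minimization; it is a k-means-style caricature under strong simplifying assumptions: each cluster is a linear image-plane trajectory p(t) = c_j + v_j * t, events are bare (x, y, t) triples, and the number of clusters k is given (the actual method does not require it). The function name `segment_events` and the farthest-point initialization are illustrative choices, not part of the original method.

```python
import numpy as np

def segment_events(events, k=2, iters=10):
    """Toy alternation between labeling and motion-model fitting.

    events: (N, 3) array of (x, y, t). Cluster j is modeled by a linear
    image-plane trajectory p(t) = c_j + v_j * t; an event's residual is
    its distance to the cluster's predicted position at its timestamp.
    """
    xy, t = events[:, :2], events[:, 2]
    # naive farthest-point initialization of the labels (assumption,
    # not the paper's initialization)
    seeds = [0]
    for _ in range(k - 1):
        d = np.linalg.norm(xy[:, None] - xy[seeds][None], axis=2).min(axis=1)
        seeds.append(int(d.argmax()))
    labels = np.linalg.norm(xy[:, None] - xy[seeds][None], axis=2).argmin(axis=1)

    C, V = np.zeros((k, 2)), np.zeros((k, 2))
    for _ in range(iters):
        # model-fitting step: per-cluster least squares for c_j, v_j
        for j in range(k):
            m = labels == j
            if m.sum() < 2:
                continue  # degenerate cluster: keep previous model
            A = np.stack([np.ones(m.sum()), t[m]], axis=1)    # columns [1, t]
            coef, *_ = np.linalg.lstsq(A, xy[m], rcond=None)  # rows: c_j, v_j
            C[j], V[j] = coef[0], coef[1]
        # labeling step: assign each event to the best-explaining model
        pred = C[None] + V[None] * t[:, None, None]           # (N, k, 2)
        labels = np.linalg.norm(xy[:, None] - pred, axis=2).argmin(axis=1)
    return labels, C, V
```

On synthetic events from two objects with distinct velocities, the alternation separates the two point sets and recovers their flow vectors; the real method replaces this hard nearest-model assignment with a graph-cut labeling over the spatio-temporal event graph.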