Background subtraction is a significant task in computer vision and an essential step for many real-world applications. One of the challenges for background subtraction methods is dynamic background, which involves stochastic movements in some parts of the background. In this paper, we propose a new background subtraction method, called DBSGen, which uses two generative neural networks, one for dynamic motion removal and the other for background generation. Finally, the foreground moving objects are obtained by a pixel-wise distance threshold based on a dynamic entropy map. The proposed method has a unified framework that can be optimized in an end-to-end and unsupervised fashion. The performance of the method is evaluated on dynamic background sequences, and it outperforms most state-of-the-art methods. Our code is publicly available at https://github.com/FatemeBahri/DBSGen.
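As a rough illustration of the final detection step described above, the sketch below computes a per-pixel entropy map from a stack of estimated motion magnitudes and then applies an entropy-adapted, pixel-wise distance threshold between a frame and the generated background. The function names, the linear threshold form, and the parameters (`num_bins`, `base_thresh`, `scale`) are illustrative assumptions for grayscale inputs in [0, 1], not the authors' exact formulation; the paper's own entropy map is derived from its dynamic motion-removal network.

```python
import numpy as np

def dynamic_entropy_map(motion_maps, num_bins=16):
    """Per-pixel Shannon entropy of estimated dynamic motion across frames.
    `motion_maps` is a (T, H, W) array of per-frame motion magnitudes
    (hypothetical input; DBSGen derives motions from its generative network)."""
    T, H, W = motion_maps.shape
    # Quantize motion values into bins, then accumulate entropy per pixel.
    bins = np.linspace(motion_maps.min(), motion_maps.max(), num_bins + 1)
    idx = np.clip(np.digitize(motion_maps, bins) - 1, 0, num_bins - 1)  # (T, H, W)
    entropy = np.zeros((H, W))
    for b in range(num_bins):
        p = (idx == b).mean(axis=0)      # probability of bin b at each pixel
        nz = p > 0
        entropy[nz] -= p[nz] * np.log2(p[nz])
    return entropy

def foreground_mask(frame, background, entropy, base_thresh=0.1, scale=0.05):
    """Pixel-wise distance threshold: pixels with high dynamic entropy get a
    larger threshold, suppressing false detections in dynamic background regions.
    The linear combination of `base_thresh` and entropy is an assumed form."""
    dist = np.abs(frame - background)        # distance to the generated background
    thresh = base_thresh + scale * entropy   # entropy-adapted per-pixel threshold
    return dist > thresh                     # boolean foreground mask
```

In this sketch, regions of the scene with stochastic motion (e.g., swaying trees or water) accumulate high entropy and therefore require a larger intensity change before being labeled foreground, which is the intuition behind using a dynamic entropy map for thresholding.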