Video segmentation is the frame-by-frame selection of meaningful regions associated with foreground moving objects. Applications include traffic monitoring, human tracking, action recognition, efficient video surveillance, and anomaly detection. In these applications, it is not rare to face challenges such as abrupt changes in weather conditions, illumination issues, shadows, subtle dynamic background motion, and camouflage effects. In this work, we address such shortcomings by proposing a novel deep learning video segmentation approach that incorporates residual information into the foreground detection learning process. The main goal is to provide a method capable of generating an accurate foreground detection given a grayscale video. Experiments conducted on the Change Detection 2014 dataset and on PetrobrasROUTES, a private dataset from Petrobras, support the effectiveness of the proposed approach relative to state-of-the-art video segmentation techniques, with overall F-measures of $\mathbf{0.9535}$ and $\mathbf{0.9636}$ on the Change Detection 2014 and PetrobrasROUTES datasets, respectively. Such a result places the proposed technique among the top three state-of-the-art video segmentation methods, while comprising approximately seven times fewer parameters than its top-ranked counterpart.