This work proposes a novel framework for visual tracking based on the integration of an iterative particle filter, a deep convolutional neural network, and a correlation filter. The iterative particle filter enables the particles to correct themselves and converge to the correct target position. We employ a novel strategy to assess the likelihood of the particles after the iterations by applying K-means clustering. Our approach ensures consistent support for the posterior distribution. Consequently, we do not need to perform resampling at every video frame, which improves the utilization of prior distribution information. Experimental results on two benchmark datasets show that our tracker performs favorably against state-of-the-art methods.
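To make the described steps concrete, the following is a minimal, hypothetical sketch of the two ideas named above: an iterative particle update that drifts particles toward a high-response location, followed by K-means clustering of the converged particles to estimate the target position. The `response_fn`, step size, and cluster count are illustrative assumptions and do not reproduce the paper's actual CNN-feature correlation response or likelihood model.

```python
# Hypothetical illustration only: iterative particle refinement + K-means
# grouping of converged particles. All names and parameters are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def iterative_particle_update(particles, response_fn, n_iters=3, step=0.5):
    """Shift each particle toward the current best-response particle for a few iterations."""
    for _ in range(n_iters):
        responses = np.array([response_fn(p) for p in particles])
        peak = particles[np.argmax(responses)]          # strongest-response particle
        particles = particles + step * (peak - particles)  # drift toward the peak
    return particles

def estimate_target(particles, n_clusters=3):
    """Cluster converged particles with K-means and return the largest cluster's center."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(particles)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    return km.cluster_centers_[np.argmax(counts)]

# Toy usage: 2-D particle positions near an unknown target at (50, 80);
# a quadratic stand-in replaces the correlation-filter response.
rng = np.random.default_rng(0)
particles = rng.normal(loc=[45.0, 75.0], scale=5.0, size=(100, 2))
response = lambda p: -np.sum((p - np.array([50.0, 80.0])) ** 2)
particles = iterative_particle_update(particles, response)
print(estimate_target(particles))
```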