Model update lies at the heart of object tracking. Generally, model update is formulated as an online learning problem in which a target model is learned over the online training dataset. Our key innovation is to \emph{learn the online learning algorithm itself from a large number of offline videos}, i.e., \emph{learning to update}. The learned updater takes the online training dataset as input and outputs an updated target model. As a first attempt, we design the learned updater based on recurrent neural networks (RNNs) and demonstrate its application in a template-based tracker and a correlation filter-based tracker. Our learned updater consistently improves the base trackers, runs faster than realtime on GPU, and requires only a small memory footprint during testing. Experiments on standard benchmarks demonstrate that our learned updater outperforms commonly used update baselines, including the efficient exponential moving average (EMA)-based update and the well-designed stochastic gradient descent (SGD)-based update. Equipped with our learned updater, the template-based tracker achieves state-of-the-art performance among realtime trackers on GPU.
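As a point of reference, here is a minimal sketch of the contrast drawn above, with notation that is ours rather than taken from the paper: the EMA baseline blends the previous target model $\theta_{t-1}$ with the model $\tilde{\theta}_t$ estimated from the current frame using a fixed rate $\gamma$, whereas the learned updater replaces this hand-crafted rule with a recurrent network $g_\phi$ whose parameters $\phi$ are trained offline and whose hidden state $h_t$ accumulates the online training data over time:
\begin{align}
\text{EMA:} \quad & \theta_t = (1-\gamma)\,\theta_{t-1} + \gamma\,\tilde{\theta}_t, \\
\text{learned:} \quad & (\theta_t,\, h_t) = g_\phi\big(\tilde{\theta}_t,\, h_{t-1}\big).
\end{align}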