Object tracking refers to the following task: given a target's initial state (e.g., position and size) in the first frame of a video, automatically estimate the state of the target object in subsequent frames. Object tracking is divided into single-object tracking and multi-object tracking. The human eye can fairly easily keep track of a particular target over a period of time, but for machines this task is far from simple, especially when the target undergoes drastic deformation, is occluded by other objects, or is surrounded by similar-looking distractors during tracking. Over the past few decades, research on object tracking has made great strides, and since various machine learning algorithms were introduced, tracking algorithms have flourished. Starting around 2013, deep learning methods began to emerge in the object tracking field and have gradually surpassed traditional methods in performance, achieving major breakthroughs.
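As a concrete toy illustration of the single-object setting just described (given the target's position and size in the first frame, estimate it in later frames), the sketch below tracks by exhaustive template matching on plain 2-D pixel grids. This is only a minimal sketch: real trackers use learned features, search windows, and model updates, none of which appear here.

```python
# Toy single-object tracker: match the first-frame template against
# every window of each later frame and take the best-matching position.
# Frames are 2-D lists of grayscale values; the "state" is the box's
# top-left corner with a fixed size (h, w).

def crop(frame, top, left, h, w):
    """Extract an h x w patch whose top-left corner is (top, left)."""
    return [row[left:left + w] for row in frame[top:top + h]]

def ssd(a, b):
    """Sum of squared differences between two equally sized patches."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb))

def track(frames, init_top, init_left, h, w):
    """Given the target's position in the first frame, estimate it in
    each later frame by exhaustively matching the initial template."""
    template = crop(frames[0], init_top, init_left, h, w)
    states = [(init_top, init_left)]
    for frame in frames[1:]:
        best = None
        for t in range(len(frame) - h + 1):
            for l in range(len(frame[0]) - w + 1):
                cost = ssd(template, crop(frame, t, l, h, w))
                if best is None or cost < best[0]:
                    best = (cost, t, l)
        states.append((best[1], best[2]))
    return states
```

Even this naive tracker exhibits the failure modes mentioned above: it drifts under deformation (the fixed template no longer matches) and locks onto distractors that resemble the template.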

Knowledge Compilation

Object Tracking (Object Tracking / Visual Tracking): A Zhuanzhi Compilation

Getting Started

  1. Moving Object Tracking series (parts 1-17)

  2. Object Tracking study notes (parts 2-4)

  3. Deep learning methods for object tracking

  4. Research on deep-learning-based multi-object tracking algorithms

  5. From traditional methods to deep learning: an overview of the development of object tracking methods

  6. Visual Tracking Algorithm Introduction

  7. Online Object Tracking: A Benchmark (paper notes and translation) - [http://blog.csdn.net/shanglianlm/article/details/47376323], [http://blog.csdn.net/roamer_nuptgczx/article/details/51379191]

  8. What are the classic object tracking algorithms in computer vision today?

Advanced Papers

NIPS2013

  • DLT: Naiyan Wang and Dit-Yan Yeung. "Learning A Deep Compact Image Representation for Visual Tracking." NIPS (2013).

NIPS2016

  • Learnet: Luca Bertinetto, João F. Henriques, Jack Valmadre, Philip H. S. Torr, Andrea Vedaldi. "Learning feed-forward one-shot learners." NIPS (2016).

Surveys

  1. Visual Tracking: An Experimental Survey. PAMI2014.
    - [http://ieeexplore.ieee.org/document/6671560/], [https://dl.acm.org/citation.cfm?id=2693387]
    - Code: [http://alov300pp.joomlafree.it/trackers-resource.html]

  2. Online Object Tracking: A Benchmark. CVPR2013: Wu Y, Lim J, Yang M-H.
    - Site and code: [http://cvlab.hanyang.ac.kr/tracker_benchmark/benchmark_v10.html]

  3. A survey of datasets for visual tracking
    - [https://link.springer.com/article/10.1007/s00138-015-0713-y]

  4. Siamese Learning Visual Tracking: A Survey

  5. A survey on multiple object tracking algorithm

Tutorial

  1. Object Tracking
  2. Stanford CS231b, Lecture 5: Visual Tracking (Alexandre Alahi, Stanford Vision Lab)

Code

  1. Hierarchical Convolutional Features for Visual Tracking
  2. Robust Visual Tracking via Convolutional Networks
  3. Learning Multi-Domain Convolutional Neural Networks for Visual Tracking
  4. Understanding and Diagnosing Visual Tracking Systems
  5. Visual Tracking with Fully Convolutional Networks
  6. Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks
  7. Learning to Track at 100 FPS with Deep Regression Networks
  8. Fully-Convolutional Siamese Networks for Object Tracking
  9. Spatially Supervised Recurrent Convolutional Neural Networks for Visual Object Tracking
  10. Unsupervised Learning from Continuous Video in a Scalable Predictive Recurrent Network
  11. ECO: Efficient Convolution Operators for Tracking
  12. End-to-end representation learning for Correlation Filter based tracking
  13. Context-Aware Correlation Filter Tracking
  14. CREST: Convolutional Residual Learning for Visual Tracking
  15. Benchmark results and paper collection compiled by Wang Qiang, a PhD student in Hu Weiming's group at the Institute of Automation, Chinese Academy of Sciences (much of this list draws on his work; thanks again)
  16. Benchmark Results of Correlation Filters. Correlation filters have seen very wide use in tracking in recent years, with striking results; this collects the relevant papers from the past few years. Most of them already appear in the Advanced Papers section above, but this GitHub repository categorizes the CF variants very thoroughly and is well worth bookmarking.
    - [https://github.com/HakaseH/CF_benchmark_results]
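Several of the entries above (ECO, the end-to-end correlation filter work, Context-Aware Correlation Filter Tracking) are correlation-filter (CF) trackers. Their core loop correlates a learned filter with the search region, takes the peak of the response map as the new target position, and updates the filter online. Below is a hedged 1-D toy sketch of that loop; real CF trackers such as MOSSE, KCF, or ECO learn and apply the filter in the Fourier domain for speed, and none of these function names come from any actual implementation.

```python
# Toy 1-D correlation-filter tracking loop: respond, locate peak, update.

def circular_correlation(filt, signal):
    """Response of `filt` at every circular shift of `signal`."""
    n = len(signal)
    return [sum(f * signal[(i + j) % n] for j, f in enumerate(filt))
            for i in range(n)]

def locate_peak(response):
    """Predicted target offset = argmax of the response map."""
    return max(range(len(response)), key=lambda i: response[i])

def update_filter(filt, new_template, rate=0.1):
    """Simple running-average model update, a stand-in for the
    regularized frequency-domain update real CF trackers use."""
    return [(1 - rate) * f + rate * t for f, t in zip(filt, new_template)]
```

The learning rate trades off adaptation (tracking appearance change) against drift (absorbing occluders into the model), which is exactly the tension the CF literature above wrestles with.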

Domain Experts

  1. Ming-Hsuan Yang [http://faculty.ucmerced.edu/mhyang/]
  • Arguably the foremost figure in visual tracking; most later researchers in the field have collaborated with him, and his citation count exceeds ten thousand.
  • Representative works:
    - Robust Visual Tracking via Consistent Low-Rank Sparse Learning
    - FCT, IJCV2014: Fast Compressive Tracking
    - RST, PAMI2014: Robust Superpixel Tracking; SPT, ICCV2011: Superpixel Tracking
    - SVD, TIP2014: Learning Structured Visual Dictionary for Object Tracking
    - ECCV2014: Spatiotemporal Background Subtraction Using Minimum Spanning Tree and Optical Flow
    - PAMI2011: Robust Object Tracking with Online Multiple Instance Learning
    - MIL, CVPR2009: Visual Tracking with Online Multiple Instance Learning
    - IJCV2008: Incremental Learning for Robust Visual Tracking
  2. Haibin Ling
  3. Huchuan Lu
  4. Hongdong Li
  5. Lei Zhang
  6. Xiaogang Wang
  7. Matej Kristan
  8. João F. Henriques
  9. Martin Danelljan
  10. Kaihua Zhang
  11. Hamed Kiani
  12. Luca Bertinetto
  13. Tianzhu Zhang

Datasets

  1. OTB
  2. VOT

This is a preliminary version and our expertise is limited; if you spot errors or gaps, suggestions and additions are welcome, and the list will be kept up to date. This is original content from the Zhuanzhi content team and may not be reproduced without permission; to request reprint permission, email fangquanyi@gmail.com or contact the Zhuanzhi assistant on WeChat (Rancho_Fang).

Follow http://www.zhuanzhi.ai and the Zhuanzhi WeChat official account for first-hand AI knowledge.

VIP Content

This paper proposes six new matching operators for Siamese object tracking, built on Ocean. It achieves SOTA performance, outperforming networks such as KYS and SiamBAN, and runs at up to 50 FPS. Code will be open-sourced soon.

Object tracking has achieved breakthrough performance in recent years, at the heart of which lies the efficient matching operator cross-correlation and its variants. Beyond this remarkable success, it is important to note that heuristic matching network designs rely heavily on expert experience. Moreover, we find experimentally that a single matching operator can hardly guarantee stable tracking across all challenging environments. In this work, we therefore introduce six novel matching operators from a feature-fusion perspective rather than an explicit similarity-learning perspective, namely concatenation, pointwise addition, pairwise relation, FiLM, simple Transformer, and transductive guidance, to explore a wider range of feasible matching-operator choices. Our analysis reveals these operators' selective adaptability to different types of environmental degradation, which motivates us to combine them to exploit complementary features. To this end, we propose Binary Channel Manipulation (BCM) to search for the optimal combination of these operators. BCM decides whether to retain or discard an operator by learning its contribution to the other tracking steps. By plugging the learned matching network into the strong baseline tracker Ocean, our model achieves favorable gains of 67.2→71.4, 52.6→58.3, and 70.3→76.0 on OTB100, LaSOT, and TrackingNet, respectively. Notably, our tracker, called AutoMatch, uses half the training data/time of the baseline tracker and runs at 50 FPS in PyTorch.

https://www.zhuanzhi.ai/paper/d9f8991dc443b0e2626a5478daf291c8
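The distinction the abstract draws between similarity-style matching (cross-correlation) and feature-fusion-style operators (e.g., concatenation, pointwise addition) can be sketched roughly as follows. The 1-D feature shapes and function names here are illustrative assumptions for exposition, not the AutoMatch implementation, and the fusion outputs would feed a learned head (omitted) rather than being read directly as a response map.

```python
# Sketch of matching operators that combine a template feature with a
# search-region feature, contrasting similarity-style and fusion-style.

def cross_correlation(template, search):
    """Classic similarity-style matching: slide the template over the
    search feature and record the inner product at each offset."""
    n, m = len(search), len(template)
    return [sum(t * s for t, s in zip(template, search[i:i + m]))
            for i in range(n - m + 1)]

def pointwise_addition(template, search):
    """Fusion-style matching: merge aligned features elementwise; a
    learned head would decode the target state from the result."""
    return [t + s for t, s in zip(template, search)]

def concatenation(template, search):
    """Another fusion operator: stack the two features channel-wise,
    leaving the combination entirely to the subsequent network."""
    return template + search
```

Cross-correlation bakes the notion of similarity into the operator itself, while the fusion operators leave it to be learned, which is why (per the abstract) different operators suit different degradation types and are worth combining.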


Latest Content

Position sensitive detectors (PSDs) offer the possibility of tracking a single active marker's two (or three) degrees-of-freedom (DoF) position with high accuracy, while having a fast response time with high update frequency and low latency, all using a very simple signal processing circuit. However, they are not particularly suitable for 6-DoF object pose tracking systems due to the lack of orientation measurement, limited tracking range, and sensitivity to environmental variation. We propose a novel 6-DoF pose tracking system for rigid object tracking that requires a single active marker. The proposed system uses a stereo-based PSD pair and multiple inertial measurement units (IMUs). This is done based on a practical approach to identifying and controlling the power of infrared light-emitting diode (IR-LED) active markers, with the aim of increasing the tracking work space and reducing power consumption. Our proposed tracking system is validated with three different work space sizes and for static and dynamic positional accuracy using a robotic arm manipulator with three different dynamic motion patterns. The results show that the static position root-mean-square (RMS) error is 0.6 mm, the dynamic position RMS error is 0.7-0.9 mm, and the orientation RMS error is between 0.04 and 0.9 degrees under varied dynamic motion. Overall, our proposed tracking system is capable of tracking a rigid object pose with sub-millimeter accuracy in the middle of the work space and sub-degree accuracy across the whole work space under a lab setting.

