Computer vision is the science of making machines "see". More concretely, it uses cameras and computers in place of human eyes to recognize, track, and measure targets, and then processes the captured data into images better suited for human viewing or for transmission to downstream inspection instruments. As a scientific discipline, computer vision studies the theories and techniques for building artificial intelligence systems that can extract information from images or multi-dimensional data.

Knowledge Compilation

computer vision

Last updated: 2022-02-19

Industry Reports and Reference Materials

  1. 《中国工业机器视觉产业发展白皮书》,2020.11,赛迪研究院
  2. 《人工智能医学影像应用与产业分析简报》,2021.6,派瑞医疗,https://pinery.io/assets/documents/2021%E6%B4%BE%E7%91%9E%E5%8C%BB%E7%96%97%E7%A7%91%E6%8A%80-%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD%E5%8C%BB%E5%AD%A6%E5%BD%B1%E5%83%8F%E5%BA%94%E7%94%A8%E4%B8%8E%E4%BA%A7%E4%B8%9A%E5%88%86%E6%9E%90%E7%AE%80%E6%8A%A5.pdf
  3. 《机器视觉核心部件龙头:行业高增长与市占率提升共振》,2021.7,浙商证券,https://pdf.dfcfw.com/pdf/H3_AP202107141503712911_1.PDF
  4. 《2020年度计算机视觉人才调研报告》,2020,德勤,https://www2.deloitte.com/content/dam/Deloitte/cn/Documents/innovation/deloitte-cn-iddc-2020-china-computer-vision-talent-survey-report-zh-210220.pdf
  5. 《OpenMMLab 计算机视觉开源算法体系》,2021,全球开源技术峰会,https://gotc.oschina.net/uploads/files/09%20%E9%99%88%E6%81%BAOpenMMLab%20%E6%BC%94%E8%AE%B2.pdf
  6. 《未来5-10年计算机视觉发展趋势》,2020,CCF计算机视觉委员会,https://mp.weixin.qq.com/s/ZCULMForCTQTub-INlTprA
  7. “图像分割二十年,盘点影响力最大的10篇论文”,2022,https://www.zhuanzhi.ai/vip/badb6122fb88ee972f5f63cc45eb47d6
  8. “CVPR 二十年,影响力最大的 10 篇论文!”,2022,https://www.zhuanzhi.ai/vip/d049a004e3d4d139d7cf10e0100b3d34

Getting Started

  1. 计算机视觉:让冰冷的机器看懂这个多彩的世界

  2. 深度学习与视觉计算 by Liang Wang (王亮), Institute of Automation, Chinese Academy of Sciences

  3. 如何做好计算机视觉的研究? by Dr. Gang Hua (华刚)

  4. 计算机视觉: a series of articles from Microsoft Research Asia (微软亚洲研究院)

  5. 卷积神经网络如何进行图像识别

  6. 相似图片搜索的原理 by Ruan Yifeng (阮一峰)

  7. 如何识别图像边缘? by Ruan Yifeng (阮一峰)

  8. 图像目标检测(Object Detection)原理与实现 (parts 1-6)

  9. 运动目标跟踪系列 (parts 1-17) - [http://blog.csdn.net/App_12062011/article/category/6269524/1]

  10. 看图说话的AI小朋友——图像标注趣谈 (parts 1 and 2)

  11. Video Analysis 相关领域介绍之 Video Captioning (video-to-text description)

  12. 从特斯拉到计算机视觉之「图像语义分割」

  13. 视觉求索: related article series from this WeChat public account

Surveys

  1. Annotated Computer Vision Bibliography: Table of Contents. Maintained by Keith Price since 1994, this index covers the literature (papers and textbooks) for essentially every topic and subtopic in computer vision, with keywords for each theme. The site is updated frequently (most recently on August 28, 2017), collects the important journal, conference, and book references for each direction, and keeps all links live.
  2. What Sparked Video Research in 1877? The Overlooked Role of the Siemens Artificial Eye by Mark Schubin 2017 http://ieeexplore.ieee.org/document/7857854/
  3. Giving machines humanlike eyes. by Posch, C., Benosman, R., Etienne-Cummings, R. 2015 http://ieeexplore.ieee.org/document/7335800/
  4. Seeing is not enough by Tom Geller, Oberlin, OH https://dl.acm.org/citation.cfm?id=2001276
  5. Visual Tracking: An Experimental Survey https://dl.acm.org/citation.cfm?id=2693387
  6. A survey on object recognition and segmentation techniques http://ieeexplore.ieee.org/document/7724975/
  7. A Review of Image Recognition with Deep Convolutional Neural Network https://link.springer.com/chapter/10.1007/978-3-319-63309-1_7
  8. Recent Advance in Content-based Image Retrieval: A Literature Survey. Wengang Zhou, Houqiang Li, and Qi Tian 2017 https://arxiv.org/pdf/1706.06064.pdf
  9. Automatic Description Generation from Images: A Survey of Models, Datasets, and Evaluation Measures 2016 https://www.jair.org/media/4900/live-4900-9139-jair.pdf
  10. 基于图像和视频信息的社交关系理解研究综述 2021 https://cjc.ict.ac.cn/online/onlinepaper/wz-20216780932.pdf

Advanced Papers

Image Classification

  1. Microsoft Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Deep Residual Learning for Image Recognition [http://arxiv.org/pdf/1512.03385v1.pdf] [http://image-net.org/challenges/talks/ilsvrc2015_deep_residual_learning_kaiminghe.pdf] (see the residual-block sketch after this list)
  2. Microsoft Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, [http://arxiv.org/pdf/1502.01852]
  3. Batch Normalization Sergey Ioffe, Christian Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift[http://arxiv.org/pdf/1502.03167]
  4. GoogLeNet Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, CVPR, 2015. [http://arxiv.org/pdf/1409.4842]
  5. VGG-Net Karen Simonyan and Andrew Zisserman, Very Deep Convolutional Networks for Large-Scale Visual Recognition, ICLR, 2015. [http://www.robots.ox.ac.uk/vgg/research/verydeep/] [http://arxiv.org/pdf/1409.1556]
  6. AlexNet Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, NIPS, 2012. [http://papers.nips.cc/book/advances-in-neural-information-processing-systems-25-2012]
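
To make the residual idea in item 1 concrete, here is a minimal PyTorch sketch of a basic residual block (two 3x3 convolutions with batch normalization and an identity shortcut). The class name and channel sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Minimal residual block: y = F(x) + x, with F = conv-BN-ReLU-conv-BN."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut, then nonlinearity

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```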

Object Detection

  1. Deep Neural Networks for Object Detection (基于DNN的对象检测)NIPS2013:
  2. R-CNN Rich feature hierarchies for accurate object detection and semantic segmentation:
  3. Fast R-CNN :
  4. Faster R-CNN Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks:
  5. Scalable Object Detection using Deep Neural Networks
  6. Scalable, High-Quality Object Detection
  7. SPP-Net Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition
  8. DeepID-Net DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection
  9. Object Detectors Emerge in Deep Scene CNNs
  10. segDeepM: Exploiting Segmentation and Context in Deep Neural Networks for Object Detection
  11. Object Detection Networks on Convolutional Feature Maps
  12. Improving Object Detection with Deep Convolutional Networks via Bayesian Optimization and Structured Prediction
  13. DeepBox: Learning Objectness with Convolutional Networks
  14. Object detection via a multi-region & semantic segmentation-aware CNN model
  15. You Only Look Once: Unified, Real-Time Object Detection
  16. YOLOv2 YOLO9000: Better, Faster, Stronger
  17. AttentionNet: Aggregating Weak Directions for Accurate Object Detection
  18. DenseBox: Unifying Landmark Localization with End to End Object Detection
  19. SSD: Single Shot MultiBox Detector
  20. DSSD : Deconvolutional Single Shot Detector
  21. G-CNN: an Iterative Grid Based Object Detector
  22. HyperNet: Towards Accurate Region Proposal Generation and Joint Object Detection
  23. A MultiPath Network for Object Detection
  24. R-FCN: Object Detection via Region-based Fully Convolutional Networks
  25. A Unified Multi-scale Deep Convolutional Neural Network for Fast Object Detection
  26. PVANET: Deep but Lightweight Neural Networks for Real-time Object Detection
  27. Feature Pyramid Networks for Object Detection
  28. Learning Chained Deep Features and Classifiers for Cascade in Object Detection
  29. DSOD: Learning Deeply Supervised Object Detectors from Scratch
  30. Focal Loss for Dense Object Detection  ICCV 2017 Best student paper award. Facebook AI Research
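
For reference, the focal loss from item 30 down-weights easy, well-classified examples so that training focuses on hard ones. Writing p_t for the model's estimated probability of the ground-truth class, the loss as given in that paper (with focusing parameter gamma, typically 2, and an optional class-balancing weight alpha_t) is:

```latex
\mathrm{FL}(p_t) = -\,\alpha_t \,(1 - p_t)^{\gamma}\, \log(p_t)
```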

Video Classification

  1. Nicolas Ballas, Li Yao, Pal Chris, Aaron Courville, "Delving Deeper into Convolutional Networks for Learning Video Representations", ICLR 2016. [http://arxiv.org/pdf/1511.06432v4.pdf]
  2. Michael Mathieu, camille couprie, Yann Lecun, "Deep Multi Scale Video Prediction Beyond Mean Square Error", ICLR 2016. Paper [http://arxiv.org/pdf/1511.05440v6.pdf]
  3. Donahue, Jeffrey, et al. Long-term recurrent convolutional networks for visual recognition and description CVPR 2015 https://arxiv.org/abs/1411.4389
  4. Karpathy, Andrej, et al. Large-scale Video Classification with Convolutional Neural Networks. CVPR 2014 http://cs.stanford.edu/people/karpathy/deepvideo/
  5. Yue-Hei Ng, Joe, et al. Beyond short snippets: Deep networks for video classification. CVPR 2015 https://arxiv.org/abs/1503.08909 (see the temporal-pooling sketch after this list)
  6. Tran, Du, et al. Learning Spatiotemporal Features with 3D Convolutional Networks. ICCV 2015 https://arxiv.org/abs/1412.0767
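
A recurring ingredient in the papers above (e.g., item 5) is aggregating per-frame CNN features over time before classifying the whole clip. Below is a minimal, hedged PyTorch sketch of such late temporal average pooling; the feature dimension, class count, and class name are illustrative assumptions, not any paper's exact model.

```python
import torch
import torch.nn as nn

class TemporalAveragePooling(nn.Module):
    """Average per-frame features over time, then classify the clip."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_features):             # [batch, time, feat_dim]
        clip_feature = frame_features.mean(dim=1)  # temporal average pooling
        return self.classifier(clip_feature)       # [batch, num_classes]

if __name__ == "__main__":
    feats = torch.randn(2, 16, 2048)  # 2 clips, 16 frames, 2048-d features each
    print(TemporalAveragePooling(2048, 400)(feats).shape)  # torch.Size([2, 400])
```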

Object Tracking

  1. Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han, Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network, arXiv:1502.06796. [http://arxiv.org/pdf/1502.06796]
  2. Hanxi Li, Yi Li and Fatih Porikli, DeepTrack: Learning Discriminative Feature Representations by Convolutional Neural Networks for Visual Tracking, BMVC, 2014. [http://www.bmva.org/bmvc/2014/files/paper028.pdf]
  3. N Wang, DY Yeung, Learning a Deep Compact Image Representation for Visual Tracking, NIPS, 2013. [http://winsty.net/papers/dlt.pdf]
  4. Chao Ma, Jia-Bin Huang, Xiaokang Yang and Ming-Hsuan Yang, Hierarchical Convolutional Features for Visual Tracking, ICCV 2015 Paper [http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Ma_Hierarchical_Convolutional_Features_ICCV_2015_paper.pdf] [https://github.com/jbhuang0604/CF2]
  5. Lijun Wang, Wanli Ouyang, Xiaogang Wang, and Huchuan Lu, Visual Tracking with fully Convolutional Networks, ICCV 2015 [http://202.118.75.4/lu/Paper/ICCV2015/iccv15_lijun.pdf] [https://github.com/scott89/FCNT]
  6. Hyeonseob Namand Bohyung Han, Learning Multi-Domain Convolutional Neural Networks for Visual Tracking. [http://arxiv.org/pdf/1510.07945.pdf] [https://github.com/HyeonseobNam/MDNet] [http://cvlab.postech.ac.kr/research/mdnet/]

Segmentation

  1. Alexander Kolesnikov, Christoph Lampert, Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation, ECCV, 2016. [http://pub.ist.ac.at/akolesnikov/files/ECCV2016/main.pdf] [https://github.com/kolesman/SEC]
  2. Guosheng Lin, Chunhua Shen, Ian Reid, Anton van dan Hengel, Efficient piecewise training of deep structured models for semantic segmentation, arXiv:1504.01013. [http://arxiv.org/pdf/1504.01013]
  3. Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Deeply Learning the Messages in Message Passing Inference, arXiv:1506.02108. [http://arxiv.org/pdf/1506.02108]
  4. Deep Parsing Network . Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang, Semantic Image Segmentation via Deep Parsing Network, arXiv:1509.02634 / ICCV 2015 [http://arxiv.org/pdf/1509.02634.pdf]
  5. CentraleSuperBoundaries, Iasonas Kokkinos, Surpassing Humans in Boundary Detection using Deep Learning INRIA [http://arxiv.org/pdf/1511.07386]
  6. BoxSup. Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation [http://arxiv.org/pdf/1503.01640]
  7. Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. [http://arxiv.org/pdf/1505.04366]
  8. Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924. [http://arxiv.org/pdf/1506.04924]
  9. Seunghoon Hong, Junhyuk Oh, Bohyung Han, and Honglak Lee, Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network. [http://arxiv.org/pdf/1512.07928.pdf] Project Page[http://cvlab.postech.ac.kr/research/transfernet/]
  10. Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks. [http://arxiv.org/pdf/1502.03240]
  11. Liang-Chieh Chen, George Papandreou, Kevin Murphy, Alan L. Yuille, Weakly-and semi-supervised learning of a DCNN for semantic image segmentation, arXiv:1502.02734. [http://arxiv.org/pdf/1502.02734]
  12. Mohammadreza Mostajabi, Payman Yadollahpour, Gregory Shakhnarovich, Feedforward Semantic Segmentation With Zoom-Out Features, CVPR, 2015 [http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Mostajabi_Feedforward_Semantic_Segmentation_2015_CVPR_paper.pdf]
  13. Holger Caesar, Jasper Uijlings, Vittorio Ferrari, Joint Calibration for Semantic Segmentation. [http://arxiv.org/pdf/1507.01581]
  14. Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015. [http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf] (see the fully-convolutional sketch after this list)
  15. Bharath Hariharan, Pablo Arbelaez, Ross Girshick, Jitendra Malik, Hypercolumns for Object Segmentation and Fine-Grained Localization, CVPR, 2015. [http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Hariharan_Hypercolumns_for_Object_2015_CVPR_paper.pdf]
  16. Abhishek Sharma, Oncel Tuzel, David W. Jacobs, Deep Hierarchical Parsing for Semantic Segmentation, CVPR, 2015. [http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Sharma_Deep_Hierarchical_Parsing_2015_CVPR_paper.pdf]
  17. Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012. [http://yann.lecun.com/exdb/publis/pdf/farabet-icml-12.pdf]
  18. Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013. [http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf]
  19. Fisher Yu, Vladlen Koltun, "Multi-Scale Context Aggregation by Dilated Convolutions", ICLR 2016, [http://arxiv.org/pdf/1511.07122v2.pdf]
  20. Niloufar Pourian, S. Karthikeyan, and B.S. Manjunath, "Weakly supervised graph based semantic segmentation by learning communities of image-parts", ICCV, 2015, [http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Pourian_Weakly_Supervised_Graph_ICCV_2015_paper.pdf]
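
To illustrate the fully convolutional idea behind item 14 (noted there), the sketch below maps dense backbone features to per-class scores with a 1x1 convolution and bilinearly upsamples them to the input resolution for per-pixel prediction. The backbone is omitted and all names and sizes are placeholders, not any paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCNHead(nn.Module):
    """1x1 conv classifier over dense features + bilinear upsampling to full resolution."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.score = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features, out_size):
        logits = self.score(features)  # [B, num_classes, h, w]
        return F.interpolate(logits, size=out_size, mode="bilinear", align_corners=False)

if __name__ == "__main__":
    feats = torch.randn(1, 512, 32, 32)   # placeholder backbone features
    head = TinyFCNHead(512, 21)           # e.g. 21 PASCAL VOC classes
    print(head(feats, (256, 256)).shape)  # torch.Size([1, 21, 256, 256])
```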

Object Recognition

  1. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell [http://arxiv.org/abs/1310.1531]
  2. CNN Features off-the-shelf: an Astounding Baseline for Recognition CVPR 2014 [http://arxiv.org/abs/1403.6382]
  3. HD-CNN: Hierarchical Deep Convolutional Neural Network for Image Classification intro: ICCV 2015 [https://arxiv.org/abs/1410.0736]
  4. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. ImageNet top-5 error: 4.94% [http://arxiv.org/abs/1502.01852]
  5. Humans and deep networks largely agree on which kinds of variation make object recognition harder [http://arxiv.org/abs/1604.06486]
  6. FusionNet: 3D Object Classification Using Multiple Data Representations [https://arxiv.org/abs/1607.05695]
  7. Deep FisherNet for Object Classification [http://arxiv.org/abs/1608.00182]
  8. Factorized Bilinear Models for Image Recognition [https://arxiv.org/abs/1611.05709]
  9. Hyperspectral CNN Classification with Limited Training Samples [https://arxiv.org/abs/1611.09007]
  10. The More You Know: Using Knowledge Graphs for Image Classification [https://arxiv.org/abs/1612.04844]
  11. MaxMin Convolutional Neural Networks for Image Classification [http://webia.lip6.fr/thomen/papers/BlotICIP2016.pdf]
  12. Cost-Effective Active Learning for Deep Image Classification. TCSVT 2016. [https://arxiv.org/abs/1701.03551]
  13. Deep Collaborative Learning for Visual Recognition [https://www.arxiv.org/abs/1703.01229]
  14. Convolutional Low-Resolution Fine-Grained Classification [https://arxiv.org/abs/1703.05393]
  15. Deep Mixture of Diverse Experts for Large-Scale Visual Recognition [https://arxiv.org/abs/1706.07901]
  16. Sunrise or Sunset: Selective Comparison Learning for Subtle Attribute Recognition [https://arxiv.org/abs/1707.06335]
  17. Why Do Deep Neural Networks Still Not Recognize These Images?: A Qualitative Analysis on Failure Cases of ImageNet Classification [https://arxiv.org/abs/1709.03439]
  18. B-CNN: Branch Convolutional Neural Network for Hierarchical Classification [https://arxiv.org/abs/1709.09890]
  19. Multiple Object Recognition with Visual Attention [https://arxiv.org/abs/1412.7755]
  20. Multiple Instance Learning Convolutional Neural Networks for Object Recognition [https://arxiv.org/abs/1610.03155]
  21. Deep Learning Face Representation from Predicting 10,000 Classes. intro: CVPR 2014 [http://mmlab.ie.cuhk.edu.hk/pdf/YiSun_CVPR14.pdf]
  22. Deep Learning Face Representation by Joint Identification-Verification [https://arxiv.org/abs/1406.4773]
  23. Deeply learned face representations are sparse, selective, and robust [http://arxiv.org/abs/1412.1265]
  24. FaceNet: A Unified Embedding for Face Recognition and Clustering [http://arxiv.org/abs/1503.03832] (see the triplet-loss formula after this list)
  25. Bilinear CNN Models for Fine-grained Visual Recognition http://vis-www.cs.umass.edu/bcnn/
  26. DeepFood: Deep Learning-Based Food Image Recognition for Computer-Aided Dietary Assessment http://arxiv.org/abs/1606.05675
  27. Multi-attribute Learning for Pedestrian Attribute Recognition in Surveillance Scenarios http://or.nsfc.gov.cn/bitstream/00001903-5/417802/1/1000014103914.pdf
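
FaceNet, listed above, learns an embedding f(x) optimized so that faces of the same identity lie closer together than faces of different identities. For an anchor x^a, a positive x^p of the same identity, a negative x^n, and margin alpha, the triplet loss is commonly written as:

```latex
\mathcal{L} = \sum_{i}\Big[\, \lVert f(x_i^{a}) - f(x_i^{p}) \rVert_2^{2} \;-\; \lVert f(x_i^{a}) - f(x_i^{n}) \rVert_2^{2} \;+\; \alpha \,\Big]_{+}
```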

Image Captioning

  1. m-RNN model: Explain Images with Multimodal Recurrent Neural Networks, 2014 https://arxiv.org/pdf/1410.1090.pdf
  2. NIC model: Show and Tell: A Neural Image Caption Generator, 2014 (see the decoder sketch after this list)
  3. MS Captivator From captions to visual concepts and back 2014
  4. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention 2015
  5. What Value Do Explicit High Level Concepts Have in Vision to Language Problems? 2016 https://arxiv.org/pdf/1506.01144.pdf
  6. Guiding Long-Short Term Memory for Image Caption Generation 2015 https://arxiv.org/pdf/1509.04942.pdf
  7. Watch What You Just Said: Image Captioning with Text-Conditional Attention 2016 https://arxiv.org/pdf/1606.04621.pdf https://github.com/LuoweiZhou/e2e-gLSTM-sc
  8. Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning 2017 https://arxiv.org/pdf/1612.01887.pdf
  9. Self-critical Sequence Training for Image Captioning 2017 CVPR https://arxiv.org/pdf/1612.00563.pdf
  10. Deep Reinforcement Learning-based Image Captioning with Embedding Reward 2017 cvpr https://arxiv.org/abs/1704.03899
  11. Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning 2017 cvpr https://arxiv.org/pdf/1612.01887.pdf https://github.com/jiasenlu/AdaptiveAttention
  12. Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models, arXiv:1411.2539. https://arxiv.org/abs/1411.2539
  13. Berkeley Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description https://arxiv.org/abs/1411.4389
  14. UML / UT  Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, NAACL-HLT, 2015. https://arxiv.org/abs/1412.4729
  15. CMU / Microsoft Xinlei Chen, C. Lawrence Zitnick, Learning a Recurrent Visual Representation for Image Caption Generation https://arxiv.org/abs/1411.5654
  16. Xinlei Chen, C. Lawrence Zitnick, Mind’s Eye: A Recurrent Visual Representation for Image Caption Generation, CVPR 2015 https://www.cs.cmu.edu/~xinleic/papers/cvpr15_rnn.pdf
  17. Facebook Remi Lebret, Pedro O. Pinheiro, Ronan Collobert, Phrase-based Image Captioning, arXiv:1502.03671 / ICML 2015 https://arxiv.org/abs/1502.03671
  18. UCLA / Baidu Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan L. Yuille, Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images https://arxiv.org/abs/1504.06692
  19. MS + Berkeley Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, C. Lawrence Zitnick, Exploring Nearest Neighbor Approaches for Image Captioning https://arxiv.org/abs/1505.04467
  20. Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell, Language Models for Image Captioning: The Quirks and What Works https://arxiv.org/abs/1505.01809
  21. Adelaide Qi Wu, Chunhua Shen, Anton van den Hengel, Lingqiao Liu, Anthony Dick, Image Captioning with an Intermediate Attributes Layer https://arxiv.org/abs/1506.01144v1
  22. Tilburg Grzegorz Chrupala, Akos Kadar, Afra Alishahi, Learning language through pictures https://arxiv.org/abs/1506.03694
  23. Univ. Montreal Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks https://arxiv.org/abs/1507.01053
  24. Cornell Jack Hessel, Nicolas Savva, Michael J. Wilber, Image Representations and New Domains in Neural Image Captioning https://arxiv.org/abs/1508.02091
  25. MS + City Univ. of HongKong Ting Yao, Tao Mei, and Chong-Wah Ngo, "Learning Query and Image Similarities with Ranking Canonical Correlation Analysis", ICCV, 2015 https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Yao_Learning_Query_and_ICCV_2015_paper.pdf
  26. Mao J, Xu W, Yang Y, et al. Deep Captioning with Multimodal Recurrent Neural Networks (m- RNN) 2015. https://arxiv.org/abs/1412.6632
  27. Pan Y, Yao T, Li H, et al. Video Captioning with Transferred Semantic Attributes 2016. https://arxiv.org/abs/1611.07675
  28. Johnson J, Karpathy A, Li F F. DenseCap: Fully Convolutional Localization Networks for Dense Captioning https://arxiv.org/abs/1511.07571
  29. I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. NIPS 2014. https://arxiv.org/abs/1409.3215
  30. Karpathy A, Li F F. Deep Visual-Semantic Alignments for Generating Image Descriptions TPAMI 2015 https://arxiv.org/abs/1412.2306
  31. A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. CVPR, 2014. http://cs.stanford.edu/people/karpathy/deepvideo/
  32. Yao T, Pan Y, Li Y, et al. Boosting Image Captioning with Attributes 2016. https://arxiv.org/abs/1611.01646
  33. Venugopalan S, Rohrbach M, Donahue J, et al. Sequence to Sequence -- Video to Text. 2015. https://arxiv.org/abs/1505.00487
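
As a rough illustration of the encoder-decoder recipe behind the NIC model in item 2 (noted there) and many of the papers above: a CNN image feature initializes an LSTM, which then emits one word at a time. The sketch below is a hedged, minimal PyTorch version with greedy decoding; all names, dimensions, and special-token ids are illustrative assumptions, not any paper's exact model.

```python
import torch
import torch.nn as nn

class TinyCaptionDecoder(nn.Module):
    """Greedy 'Show and Tell'-style decoder: image feature -> LSTM -> word sequence."""
    def __init__(self, feat_dim, vocab_size, hidden_dim=512, embed_dim=512):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)  # image feature -> initial hidden state
        self.init_c = nn.Linear(feat_dim, hidden_dim)  # image feature -> initial cell state
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    @torch.no_grad()
    def greedy_decode(self, image_feat, bos_id=1, eos_id=2, max_len=20):
        h, c = self.init_h(image_feat), self.init_c(image_feat)
        word = torch.full((image_feat.size(0),), bos_id, dtype=torch.long)
        caption = []
        for _ in range(max_len):
            h, c = self.lstm(self.embed(word), (h, c))
            word = self.out(h).argmax(dim=-1)  # pick the most likely next word
            caption.append(word)
            if (word == eos_id).all():
                break
        return torch.stack(caption, dim=1)  # [batch, length]

if __name__ == "__main__":
    decoder = TinyCaptionDecoder(feat_dim=2048, vocab_size=10000)
    print(decoder.greedy_decode(torch.randn(1, 2048)).shape)
```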

Video Captioning

  1. Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.
    [http://jeffdonahue.com/lrcn/]
    [http://arxiv.org/pdf/1411.4389.pdf]
  2. Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729. UT / UML / Berkeley [http://arxiv.org/pdf/1412.4729]
  3. Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861. Microsoft [http://arxiv.org/pdf/1505.01861]
  4. Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence--Video to Text, arXiv:1505.00487. UT / Berkeley / UML [http://arxiv.org/pdf/1505.00487]
  5. Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, Describing Videos by Exploiting Temporal Structure, arXiv:1502.08029 Univ. Montreal / Univ. Sherbrooke [http://arxiv.org/pdf/1502.08029.pdf]
  6. Anna Rohrbach, Marcus Rohrbach, Bernt Schiele, The Long-Short Story of Movie Description, arXiv:1506.01698 MPI / Berkeley [http://arxiv.org/pdf/1506.01698.pdf]
  7. Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, arXiv:1506.06724 Univ. Toronto / MIT [http://arxiv.org/pdf/1506.06724.pdf]
  8. Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053 Univ. Montreal [http://arxiv.org/pdf/1507.01053.pdf]
  9. Dotan Kaufman, Gil Levi, Tal Hassner, Lior Wolf, Temporal Tessellation for Video Annotation and Summarization, arXiv:1612.06950. TAU / USC [https://arxiv.org/pdf/1612.06950.pdf]
  10. Chiori Hori, Takaaki Hori, Teng-Yok Lee, Kazuhiro Sumi, John R. Hershey, Tim K. Marks Attention-Based Multimodal Fusion for Video Description https://arxiv.org/abs/1701.03126
  11. Describing Videos using Multi-modal Fusion https://dl.acm.org/citation.cfm?id=2984065
  12. Andrew Shin ,  Katsunori Ohnishi , Tatsuya Harada Beyond caption to narrative: Video captioning with multiple sentences http://ieeexplore.ieee.org/abstract/document/7532983/
  13. Jianfeng Dong, Xirong Li, Cees G. M. Snoek Word2VisualVec: Image and Video to Sentence Matching by Visual Feature Prediction https://pdfs.semanticscholar.org/de22/8875bc33e9db85123469ef80fc0071a92386.pdf
  14. Multimodal Video Description https://dl.acm.org/citation.cfm?id=2984066
  15. Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, Eric P. Xing Recurrent Topic-Transition GAN for Visual Paragraph Generation https://arxiv.org/abs/1703.07022
  16. Weakly Supervised Dense Video Captioning(CVPR2017)
  17. Multi-Task Video Captioning with Video and Entailment Generation(ACL2017)

Visual Question Answering

  1. Kushal Kafle, and Christopher Kanan. An Analysis of Visual Question Answering Algorithms. arXiv:1703.09684, 2017. [https://arxiv.org/abs/1703.09684]
  2. Hyeonseob Nam, Jung-Woo Ha, Jeonghee Kim, Dual Attention Networks for Multimodal Reasoning and Matching, arXiv:1611.00471, 2016. [https://arxiv.org/abs/1611.00471]
  3. Jin-Hwa Kim, Kyoung Woon On, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang, Hadamard Product for Low-rank Bilinear Pooling, arXiv:1610.04325, 2016. [https://arxiv.org/abs/1610.04325]
  4. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, Marcus Rohrbach, Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding, arXiv:1606.01847, 2016. [https://arxiv.org/abs/1606.01847] [[code]][https://github.com/akirafukui/vqa-mcb]
  5. Kuniaki Saito, Andrew Shin, Yoshitaka Ushiku, Tatsuya Harada, DualNet: Domain-Invariant Network for Visual Question Answering. arXiv:1606.06108v1, 2016. [https://arxiv.org/pdf/1606.06108.pdf]
  6. Arijit Ray, Gordon Christie, Mohit Bansal, Dhruv Batra, Devi Parikh, Question Relevance in VQA: Identifying Non-Visual And False-Premise Questions, arXiv:1606.06622, 2016. [https://arxiv.org/pdf/1606.06622v1.pdf]
  7. Hyeonwoo Noh, Bohyung Han, Training Recurrent Answering Units with Joint Loss Minimization for VQA, arXiv:1606.03647, 2016. [http://arxiv.org/abs/1606.03647v1]
  8. Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh, Hierarchical Question-Image Co-Attention for Visual Question Answering, arXiv:1606.00061, 2016. [https://arxiv.org/pdf/1606.00061v2.pdf] [[code]][https://github.com/jiasenlu/HieCoAttenVQA]
  9. Jin-Hwa Kim, Sang-Woo Lee, Dong-Hyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang, Multimodal Residual Learning for Visual QA, arXiv:1606.01455, 2016. [https://arxiv.org/pdf/1606.01455v1.pdf]
  10. Peng Wang, Qi Wu, Chunhua Shen, Anton van den Hengel, Anthony Dick, FVQA: Fact-based Visual Question Answering, arXiv:1606.05433, 2016. [https://arxiv.org/pdf/1606.05433.pdf]
  11. Ilija Ilievski, Shuicheng Yan, Jiashi Feng, A Focused Dynamic Attention Model for Visual Question Answering, arXiv:1604.01485. [https://arxiv.org/pdf/1604.01485v1.pdf]
  12. Yuke Zhu, Oliver Groth, Michael Bernstein, Li Fei-Fei, Visual7W: Grounded Question Answering in Images, CVPR 2016. [http://arxiv.org/abs/1511.03416]
  13. Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han, Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction, CVPR, 2016.[http://arxiv.org/pdf/1511.05756.pdf]
  14. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein, Learning to Compose Neural Networks for Question Answering, NAACL 2016. [http://arxiv.org/pdf/1601.01705.pdf]
  15. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein, Deep compositional question answering with neural module networks, CVPR 2016. [https://arxiv.org/abs/1511.02799]
  16. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola, Stacked Attention Networks for Image Question Answering, CVPR 2016. [http://arxiv.org/abs/1511.02274] [[code]][https://github.com/JamesChuanggg/san-torch]
  17. Kevin J. Shih, Saurabh Singh, Derek Hoiem, Where To Look: Focus Regions for Visual Question Answering, CVPR, 2015. [http://arxiv.org/pdf/1511.07394v2.pdf]
  18. Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, Ram Nevatia, ABC-CNN: An Attention Based Convolutional Neural Network for Visual Question Answering, arXiv:1511.05960v1, Nov 2015. [http://arxiv.org/pdf/1511.05960v1.pdf]
  19. Huijuan Xu, Kate Saenko, Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering, arXiv:1511.05234v1, Nov 2015. [http://arxiv.org/abs/1511.05234]
  20. Kushal Kafle and Christopher Kanan, Answer-Type Prediction for Visual Question Answering, CVPR 2016. [http://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Kafle_Answer-Type_Prediction_for_CVPR_2016_paper.html]
  21. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, ICCV, 2015. [http://arxiv.org/pdf/1505.00468]
  22. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, ICCV, 2015. [http://arxiv.org/pdf/1505.00468] [[code]][https://github.com/JamesChuanggg/VQA-tensorflow]
  23. Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus, Simple Baseline for Visual Question Answering, arXiv:1512.02167v2, Dec 2015. [http://arxiv.org/abs/1512.02167] (see the baseline sketch after this list)
  24. Hauyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu, Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering, NIPS 2015. [http://arxiv.org/pdf/1505.05612.pdf]
  25. Mateusz Malinowski, Marcus Rohrbach, Mario Fritz, Ask Your Neurons: A Neural-based Approach to Answering Questions about Images, ICCV 2015. [http://arxiv.org/pdf/1505.01121v3.pdf]
  26. Mengye Ren, Ryan Kiros, Richard Zemel, Exploring Models and Data for Image Question Answering, ICML 2015. [http://arxiv.org/pdf/1505.02074.pdf]
  27. Mateusz Malinowski, Mario Fritz, Towards a Visual Turing Challenge, NIPS Workshop 2015. [http://arxiv.org/abs/1410.8027]
  28. Mateusz Malinowski, Mario Fritz, A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input, NIPS 2014. [http://arxiv.org/pdf/1410.0210v4.pdf]
  29. Hedi Ben-younes, Remi Cadene, Matthieu Cord, Nicolas Thome: MUTAN: Multimodal Tucker Fusion for Visual Question Answering [https://arxiv.org/pdf/1705.06676.pdf] [[Code]][https://github.com/Cadene/vqa.pytorch]
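
In the spirit of the simple baseline in item 23 (noted there), many VQA systems reduce to: encode the image and the question as fixed-length vectors, fuse them, and classify over a fixed vocabulary of frequent answers. This is only a hedged sketch of that pattern; the encoders are omitted and every name and dimension is an illustrative assumption.

```python
import torch
import torch.nn as nn

class TinyVQABaseline(nn.Module):
    """Fuse an image feature and a question feature, then classify over candidate answers."""
    def __init__(self, img_dim=2048, q_dim=300, hidden=1024, num_answers=3000):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + q_dim, hidden),  # simple concatenation + MLP fusion
            nn.ReLU(),
            nn.Linear(hidden, num_answers),
        )

    def forward(self, img_feat, q_feat):
        return self.fuse(torch.cat([img_feat, q_feat], dim=-1))  # answer logits

if __name__ == "__main__":
    logits = TinyVQABaseline()(torch.randn(4, 2048), torch.randn(4, 300))
    print(logits.shape)  # torch.Size([4, 3000])
```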

Edge Detection

  1. Saining Xie, Zhuowen Tu, Holistically-Nested Edge Detection [http://arxiv.org/pdf/1504.06375] [https://github.com/s9xie/hed] (a classical Canny baseline sketch follows this list)
  2. Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015. [http://arxiv.org/pdf/1412.1123]
  3. Wei Shen, Xinggang Wang, Yan Wang, Xiang Bai, Zhijiang Zhang, DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection, CVPR, 2015. [http://mc.eistar.net/UpLoadFiles/Papers/DeepContour_cvpr15.pdf]
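
Deep edge detectors such as those above are commonly compared against classical operators. As a hedged point of reference (assuming OpenCV is installed and "input.jpg" is a placeholder image path), a Canny baseline takes only a few lines:

```python
import cv2

# Classical Canny edge detection as a baseline for comparison.
image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder image path
edges = cv2.Canny(image, 100, 200)                     # low/high hysteresis thresholds (illustrative values)
cv2.imwrite("edges.png", edges)
```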

Human Pose Estimation

  1. Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, CVPR, 2017.
  2. Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele, Deepcut: Joint subset partition and labeling for multi person pose estimation, CVPR, 2016.
  3. Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh, Convolutional pose machines, CVPR, 2016.
  4. Alejandro Newell, Kaiyu Yang, and Jia Deng, Stacked hourglass networks for human pose estimation, ECCV, 2016. (see the heatmap-decoding sketch after this list)
  5. Tomas Pfister, James Charles, and Andrew Zisserman, Flowing convnets for human pose estimation in videos, ICCV, 2015.
  6. Jonathan J. Tompson, Arjun Jain, Yann LeCun, Christoph Bregler, Joint training of a convolutional network and a graphical model for human pose estimation, NIPS, 2014.
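
Most of the methods above (e.g., items 3 and 4, noted at item 4) predict one heatmap per body joint; recovering keypoint coordinates then amounts to taking the argmax of each map. A minimal numpy sketch, with illustrative shapes and a hypothetical helper name:

```python
import numpy as np

def decode_keypoints(heatmaps):
    """heatmaps: [num_joints, H, W] -> (x, y) pixel coordinates per joint."""
    num_joints, h, w = heatmaps.shape
    coords = np.zeros((num_joints, 2), dtype=np.int64)
    for j in range(num_joints):
        y, x = np.unravel_index(np.argmax(heatmaps[j]), (h, w))
        coords[j] = (x, y)  # store as (x, y)
    return coords

if __name__ == "__main__":
    maps = np.random.rand(16, 64, 64)     # e.g. 16 joints on a 64x64 grid
    print(decode_keypoints(maps).shape)   # (16, 2)
```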

Image Generation

  1. Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu. "Conditional Image Generation with PixelCNN Decoders"[https://arxiv.org/pdf/1606.05328v2.pdf][https://github.com/kundan2510/pixelCNN]
  2. Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, "Learning to Generate Chairs with Convolutional Neural Networks", CVPR, 2015. [http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Dosovitskiy_Learning_to_Generate_2015_CVPR_paper.pdf]
  3. Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra, "DRAW: A Recurrent Neural Network For Image Generation", ICML, 2015. [https://arxiv.org/pdf/1502.04623v2.pdf]
  4. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Generative Adversarial Networks, NIPS, 2014. [http://arxiv.org/abs/1406.2661] (the minimax objective is reproduced after this list)
  5. Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus, Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, NIPS, 2015. [http://arxiv.org/abs/1506.05751]
  6. Lucas Theis, Aäron van den Oord, Matthias Bethge, "A note on the evaluation of generative models", ICLR 2016. [http://arxiv.org/abs/1511.01844]
  7. Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil Lawrence, "Variationally Auto-Encoded Deep Gaussian Processes", ICLR 2016. [http://arxiv.org/pdf/1511.06455v2.pdf]
  8. Elman Mansimov, Emilio Parisotto, Jimmy Ba, Ruslan Salakhutdinov, "Generating Images from Captions with Attention", ICLR 2016, [http://arxiv.org/pdf/1511.02793v2.pdf]
  9. Jost Tobias Springenberg, "Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks", ICLR 2016, [http://arxiv.org/pdf/1511.06390v1.pdf]
  10. Harrison Edwards, Amos Storkey, "Censoring Representations with an Adversary", ICLR 2016, [http://arxiv.org/pdf/1511.05897v3.pdf]
  11. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii, "Distributional Smoothing with Virtual Adversarial Training", ICLR 2016, [http://arxiv.org/pdf/1507.00677v8.pdf]
  12. Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros, "Generative Visual Manipulation on the Natural Image Manifold", ECCV 2016. [https://arxiv.org/pdf/1609.03552v2.pdf] [https://github.com/junyanz/iGAN] [https://youtu.be/9c4z6YsBGQ0]
  13. Alec Radford, Luke Metz, Soumith Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", ICLR 2016. [http://arxiv.org/pdf/1511.06434.pdf]
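
The original GAN formulation in item 4 (noted there) trains a generator G against a discriminator D in a two-player minimax game; the value function from that paper is:

```latex
\min_G \max_D \; V(D, G) \;=\; \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```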

Courses

  1. Stanford Vision Lab homepage: http://vision.stanford.edu/ Fei-Fei Li's group offers CS131, CS231A, and CS231n, arguably the best set of computer vision courses available.
  2. CS 131 Computer Vision: Foundations and Applications: fundamentals, covering classical topics such as edge detection, feature point description, camera calibration, and panorama stitching. [http://vision.stanford.edu/teaching/cs131_fall1415/schedule.html]
  3. CS231A Computer Vision: from 3D reconstruction to recognition: [http://cvgl.stanford.edu/teaching/cs231a_winter1415/schedule.html]
  4. CS231n 2017: Convolutional Neural Networks for Visual Recognition: covers the structure of convolutional neural networks, the principles and optimization of their components, and a range of applications. [http://vision.stanford.edu/teaching/cs231n/] Chinese mirror (Bilibili): [http://www.bilibili.com/video/av13260183/]
  5. Stanford CS231n 2016: Convolutional Neural Networks for Visual Recognition
  6. 1st Summer School on Deep Learning for Computer Vision, Barcelona (July 4-8, 2016)
  7. 2nd Summer School on Deep Learning for Computer Vision, Barcelona (June 21-27, 2017) https://telecombcn-dl.github.io/2017-dlcv/
  8. 计算机视觉 课程讲义 (computer vision course lecture notes)

Tutorials

  1. Intro to Deep Learning for Computer Vision 2016 http://chaosmail.github.io/deeplearning/2016/10/22/intro-to-deep-learning-for-computer-vision/
  2. CVPR 2014 Tutorial on Deep Learning in Computer Vision https://sites.google.com/site/deeplearningcvpr2014/
  3. CVPR 2015  Applied Deep Learning for Computer Vision with Torch https://github.com/soumith/cvpr2015
  4. Deep Learning for Computer Vision – Introduction to Convolution Neural Networks http://www.analyticsvidhya.com/blog/2016/04/deep-learning-computer-vision-introduction-convolution-neural-networks/
  5. A Beginner's Guide To Understanding Convolutional Neural Networks [https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/]
  6. CVPR'17 Tutorial Deep Learning for Objects and Scenes by Kaiming He Ross Girshick http://deeplearning.csail.mit.edu/
  7. CVPR tutorial : Large-Scale Visual Recognition http://www.europe.naverlabs.com/Research/Computer-Vision/Highlights/CVPR-tutorial-Large-Scale-Visual-Recognition
  8. CVPR’16 Tutorial on Image Tag Assignment, Refinement and Retrieval http://www.lambertoballan.net/2016/06/cvpr16-tutorial-image-tag-assignment-refinement-and-retrieval/
  9. Tutorial on Answering Questions about Images with Deep Learning The tutorial was presented at '2nd Summer School on Integrating Vision and Language: Deep Learning' in Malta, 2016 [https://arxiv.org/abs/1610.01076]
  10. “Semantic Segmentation for Scene Understanding: Algorithms and Implementations" tutorial [ https://www.youtube.com/watch?v=pQ318oCGJGY]
  11. A tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the "echo state network" approach [http://minds.jacobs-university.de/sites/default/files/uploads/papers/ESNTutorialRev.pdf] [http://deeplearning.cs.cmu.edu/notes/shaoweiwang.pdf]
  12. Towards Good Practices for Recognition & Detection by Hikvision Research Institute. Supervised Data Augmentation (SDA) [http://image-net.org/challenges/talks/2016/Hikvision_at_ImageNet_2016.pdf]
  13. Generative Adversarial Networks by Ian Goodfellow, NIPS 2016 tutorial [ https://arxiv.org/abs/1701.00160] [http://www.iangoodfellow.com/slides/2016-12-04-NIPS.pdf]

Books

  1. Two classic textbooks: Computer Vision: A Modern Approach and Computer Vision: Algorithms and Applications; it is reasonable to finish the first before reading the second.
  2. Computer Vision: A Modern Approach by David A. Forsyth, Jean Ponce. English: http://cmuems.com/excap/readings/forsyth-ponce-computer-vision-a-modern-approach.pdf Chinese translation: https://pan.baidu.com/s/1min99eK
  3. Computer Vision: Algorithms and Applications by Richard Szeliski. English: http://szeliski.org/Book/drafts/SzeliskiBook_20100903_draft.pdf Chinese translation: https://pan.baidu.com/s/1mhYGtio
  4. Computer Vision: Models, Learning, and Inference by Simon J.D. Prince. The book's homepage also provides slides, code, tutorials, demos, and other companion resources. http://www.computervisionmodels.com/
  5. Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence by Matti Pietikäinen, Olli Silven, 2021, 241 pages. [https://www.zhuanzhi.ai/vip/50e9f1f79b73461fb4931d271028369e]

Related Journals and Conferences

International Conferences

  1. CVPR, Computer Vision and Pattern Recognition 
    CVPR 2017:http://cvpr2017.thecvf.com/
  2. ICCV, International Conference on Computer Vision 
    ICCV2017:http://iccv2017.thecvf.com/
  3. ECCV, European Conference on Computer Vision
  4. SIGGRAPH, Special Interest Group on Computer Graphics and Interactive techniques 
    SIGGRAPH2017 http://s2017.siggraph.org/
  5. ACM International Conference on Multimedia 
    ACMMM2017: http://www.acmmm.org/2017/
  6.  ICIP, International Conference on Image Processing http://2017.ieeeicip.org/

Journals

  1. ACM Transactions on Graphics
  2. IEEE Communications Surveys and Tutorials
  3. IEEE Signal Processing Magazine
  4. IEEE Transactions on Evolutionary Computation
  5. IEEE Transactions on Geoscience and Remote Sensing (CAS Tier 2)
  6. IEEE Transactions on Pattern Analysis and Machine Intelligence
  7. Neurocomputing (CAS Tier 2)
  8. Pattern Recognition Letters (CAS Tier 2)
  9. Proceedings of the IEEE
  10. Signal, Image and Video Processing (CAS Tier 4)
  11. IEEE Journal on Selected Areas in Communications (CAS Tier 2)
  12. IEEE Transactions on Image Processing (CAS Tier 2)
  13. Journal of Visual Communication and Image Representation (CAS Tier 3)
  14. Machine Vision and Applications (CAS Tier 3)
  15. Pattern Recognition (CAS Tier 2)
  16. Signal Processing: Image Communication (CAS Tier 3)
  17. Computer Vision and Image Understanding (CAS Tier 3)
  18. IET Image Processing (CAS Tier 4)
  19. Artificial Intelligence (CAS Tier 2)
  20. Machine Learning (CAS Tier 3)
  21. Medical Image Analysis (CAS Tier 2)

Domain Experts

North America

http://www.ics.uci.edu/~yyang8/extra/cv_giants.html

Other schools in no particular order:

  • UCB (Malik, Darrel, Efros)
  • UMD (Davis, Chellappa, Jacobs, Aloimonos, Doermann)
  • UIUC (Forsyth, Hoiem, Ahuja, Lazebnik)
  • UCSD (Kriegman)
  • UT-Austin (Aggarwal, Grauman)
  • Stanford (Fei-Fei Li, Savarese)
  • USC (Nevatia, Medioni)
  • Brown (Felzenszwalb, Hays, Sudderth)
  • NYU (Rob Fergus)
  • UC-Irvine (Fowlkes)
  • UNC (Tamara Berg, Alex Berg, Jan-Michael Frahm)
  • Columbia (Belhumeur, Shree Nayar, Shih-Fu Chang)
  • Washington (Seitz, Farhadi)
  • UMass, Amherst (Learned-Miller, Maji)
  • Cornell Tech (Belongie)
  • Virginia Tech (Batra, Parikh)
  • Princeton (Xiao)
  • Caltech (Perona)

http://www.it610.com/article/1770772.htm

Organizations and Research Institutes

  1. The Computer Vision Foundation → http://www.cv-foundation.org/ A non-profit organization that fosters and supports research in all aspects of computer vision.
  2. Stanford Vision Lab → http://vision.stanford.edu/ Focuses on two intimately connected branches of vision research: computer vision and human vision.
  3. UC Berkeley Computer Vision Group → https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/
  4. Face Recognition Homepage → http://www.face-rec.org/
  5. USC Computer Vision → http://iris.usc.edu/USC-Computer-Vision.html The Computer Vision Laboratory at the University of Southern California.
  6. iBUG (Intelligent Behaviour Understanding Group) → https://ibug.doc.ic.ac.uk/ The core expertise of the iBUG group is the machine analysis of human behaviour in space and time, including face analysis, body gesture analysis, visual, audio, and multimodal analysis of human behaviour, and biometrics analysis.

Datasets

Detection

  1. PASCAL VOC 2009 dataset Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
  2. LabelMe dataset LabelMe is a web-based image annotation tool that allows researchers to label images and share the annotations with the rest of the community. If you use the database, we only ask that you contribute to it, from time to time, by using the labeling tool.
  3. BioID Face Detection Database 1521 images with human faces, recorded under natural conditions, i.e. varying illumination and complex background. The eye positions have been set manually.
  4. CMU/VASC & PIE Face dataset
  5. Yale Face dataset
  6. Caltech Cars, Motorcycles, Airplanes, Faces, Leaves, Backgrounds
  7. Caltech 101 Pictures of objects belonging to 101 categories
  8. Caltech 256 Pictures of objects belonging to 256 categories
  9. Daimler Pedestrian Detection Benchmark 15,560 pedestrian and non-pedestrian samples (image cut-outs) and 6744 additional full images not containing pedestrians for bootstrapping. The test set contains more than 21,790 images with 56,492 pedestrian labels (fully visible or partially occluded), captured from a vehicle in urban traffic.
  10. MIT Pedestrian dataset CVC Pedestrian Datasets
  11. CVC Pedestrian Datasets CBCL Pedestrian Database
  12. MIT Face dataset (CBCL Face Database)
  13. MIT Car dataset (CBCL Car Database)
  14. MIT Street dataset (CBCL Street Database)
  15. INRIA Person Data Set A large set of marked up images of standing or walking people
  16. INRIA car dataset A set of car and non-car images taken in a parking lot nearby INRIA
  17. INRIA horse dataset A set of horse and non-horse images
  18. H3D Dataset 3D skeletons and segmented regions for 1000 people in images
  19. HRI RoadTraffic dataset A large-scale vehicle detection dataset
  20. BelgaLogos 10000 images of natural scenes, with 37 different logos, and 2695 logo instances, annotated with a bounding box.
  21. FlickrBelgaLogos 10000 images of natural scenes grabbed on Flickr, with 2695 logo instances cut and pasted from the BelgaLogos dataset.
  22. FlickrLogos-32 The dataset FlickrLogos-32 contains photos depicting logos and is meant for the evaluation of multi-class logo detection/recognition as well as logo retrieval methods on real-world images. It consists of 8240 images downloaded from Flickr.
  23. TME Motorway Dataset 30000+ frames with vehicle rear annotation and classification (car and trucks) on motorway/highway sequences. Annotation semi-automatically generated using laser-scanner data. Distance estimation and consistent target ID over time available.
  24. PHOS (Color Image Database for illumination invariant feature selection) Phos is a color image database of 15 scenes captured under different illumination conditions. More particularly, every scene of the database contains 15 different images: 9 images captured under various strengths of uniform illumination, and 6 images under different degrees of non-uniform illumination. The images contain objects of different shape, color and texture and can be used for illumination invariant feature detection and selection.
  25. CaliforniaND: An Annotated Dataset For Near-Duplicate Detection In Personal Photo Collections California-ND contains 701 photos taken directly from a real user's personal photo collection, including many challenging non-identical near-duplicate cases, without the use of artificial image transformations. The dataset is annotated by 10 different subjects, including the photographer, regarding near duplicates.
  26. USPTO Algorithm Challenge, Detecting Figures and Part Labels in Patents Contains drawing pages from US patents with manually labeled figure and part labels.
  27. Abnormal Objects Dataset Contains 6 object categories similar to object categories in Pascal VOC that are suitable for studying the abnormalities stemming from objects.
  28. Human detection and tracking using RGB-D camera Collected in a clothing store. Captured with Kinect (640*480, about 30fps)
  29. Multi-Task Facial Landmark (MTFL) dataset This dataset contains 12,995 face images collected from the Internet. The images are annotated with (1) five facial landmarks, (2) attributes of gender, smiling, wearing glasses, and head pose.
  30. WIDER FACE: A Face Detection Benchmark WIDER FACE dataset is a face detection benchmark dataset with images selected from the publicly available WIDER dataset. It contains 32,203 images and 393,703 face annotations.
  31. PIROPO Database: People in Indoor ROoms with Perspective and Omnidirectional cameras Multiple sequences recorded in two different indoor rooms, using both omnidirectional and perspective cameras, containing people in a variety of situations (people walking, standing, and sitting). Both annotated and non-annotated sequences are provided, where ground truth is point-based. In total, more than 100,000 annotated frames are available.

Classification

  1. PASCAL VOC 2009 dataset Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
  2. Caltech Cars, Motorcycles, Airplanes, Faces, Leaves, Backgrounds
  3. Caltech 101 Pictures of objects belonging to 101 categories
  4. Caltech 256 Pictures of objects belonging to 256 categories
  5. ETHZ Shape Classes A dataset for testing object class detection algorithms. It contains 255 test images and features five diverse shape-based classes (apple logos, bottles, giraffes, mugs, and swans).
  6. Flower classification data sets 17 Flower Category Dataset
  7. Animals with attributes A dataset for Attribute Based Classification. It consists of 30475 images of 50 animals classes with six pre-extracted feature representations for each image.
  8. Stanford Dogs Dataset Dataset of 20,580 images of 120 dog breeds with bounding-box annotation, for fine-grained image categorization.
  9. Video classification USAA dataset The USAA dataset includes 8 different semantic class videos, which are home videos of social occasions featuring activities of groups of people. It contains around 100 videos for training and testing respectively. Each video is labeled by 69 attributes. The 69 attributes can be broken down into five broad classes: actions, objects, scenes, sounds, and camera movement.
  10. McGill Real-World Face Video Database This database contains 18000 video frames of 640x480 resolution from 60 video sequences, each of which recorded from a different subject (31 female and 29 male).
  11. e-Lab Video Data Set Video data sets to train machines to recognise objects in our environment. e-VDS35 has 35 classes and a total of 2050 videos of roughly 10 seconds each.

Recognition

  1. Face and Gesture Recognition Working Group (FGnet)
  2. FERET face database
  3. PUT face 9971 images of 100 people
  4. Labeled Faces in the Wild A database of face photographs designed for studying the problem of unconstrained face recognition
  5. Urban scene recognition Traffic Lights Recognition, Lara's public benchmarks.
  6. PubFig: Public Figures Face Database The PubFig database is a large, real-world face dataset consisting of 58,797 images of 200 people collected from the internet. Unlike most other existing face datasets, these images are taken in completely uncontrolled situations with non-cooperative subjects.
  7. YouTube Faces The data set contains 3,425 videos of 1,595 different people. The shortest clip duration is 48 frames, the longest clip is 6,070 frames, and the average length of a video clip is 181.3 frames.
  8. MSRC-12: Kinect gesture data set The Microsoft Research Cambridge-12 Kinect gesture data set consists of sequences of human movements, represented as body-part locations, and the associated gesture to be recognized by the system.
  9. QMUL underGround Re-IDentification (GRID) Dataset This dataset contains 250 pedestrian image pairs + 775 additional images captured in a busy underground station for the research on person re-identification.
  10. Person identification in TV series Face tracks, features and shot boundaries from our latest CVPR 2013 paper. It is obtained from 6 episodes of Buffy the Vampire Slayer and 6 episodes of Big Bang Theory.
  11. ChokePoint Dataset ChokePoint is a video dataset designed for experiments in person identification/verification under real-world surveillance conditions. The dataset consists of 25 subjects (19 male and 6 female) in portal 1 and 29 subjects (23 male and 6 female) in portal 2.
  12. Hieroglyph Dataset Ancient Egyptian Hieroglyph Dataset.
  13. Rijksmuseum Challenge Dataset: Visual Recognition for Art Dataset Over 110,000 photographic reproductions of the artworks exhibited in the Rijksmuseum (Amsterdam, the Netherlands). Offers four automatic visual recognition challenges consisting of predicting the artist, type, material and creation year. Includes a set of baseline features, and offer a baseline based on state-of-the-art image features encoded with the Fisher vector.
  14. The OU-ISIR Gait Database, Treadmill Dataset Treadmill gait datasets composed of 34 subjects with 9 speed variations, 68 subjects with clothing variations, and 185 subjects with various degrees of gait fluctuations.
  15. The OU-ISIR Gait Database, Large Population Dataset Large population gait datasets composed of 4,016 subjects.
  16. Pedestrian Attribute Recognition At Far Distance Large-scale PEdesTrian Attribute (PETA) dataset, covering more than 60 attributes (e.g. gender, age range, hair style, casual/formal) on 19000 images.
  17. FaceScrub Face Dataset The FaceScrub dataset is a real-world face dataset comprising 107,818 face images of 530 male and female celebrities detected in images retrieved from the Internet. The images are taken under real-world situations (uncontrolled conditions). Name and gender annotations of the faces are included.
  18. Depth-Based Person Identification Depth-Based Person Identification from Top View Dataset.

Tracking

  • Dataset-AMP: Luka Čehovin Zajc; Alan Lukežič; Aleš Leonardis; Matej Kristan. "Beyond Standard Benchmarks: Parameterizing Performance Evaluation in Visual Object Tracking." ICCV (2017). [paper]

  • Dataset-Nfs: Hamed Kiani Galoogahi, Ashton Fagg, Chen Huang, Deva Ramanan and Simon Lucey. "Need for Speed: A Benchmark for Higher Frame Rate Object Tracking." ICCV (2017) [paper] [supp] [project]

  • Dataset-DTB70: Siyi Li, Dit-Yan Yeung. "Visual Object Tracking for Unmanned Aerial Vehicles: A Benchmark and New Motion Models." AAAI (2017) [paper] [project] [dataset]

  • Dataset-UAV123: Matthias Mueller, Neil Smith and Bernard Ghanem. "A Benchmark and Simulator for UAV Tracking." ECCV (2016) [paper] [project] [dataset]

  • Dataset-TColor-128: Pengpeng Liang, Erik Blasch, Haibin Ling. "Encoding color information for visual tracking: Algorithms and benchmark." TIP (2015) [paper] [project] [dataset]

  • Dataset-NUS-PRO: Annan Li, Min Lin, Yi Wu, Ming-Hsuan Yang, and Shuicheng Yan. "NUS-PRO: A New Visual Tracking Challenge." PAMI (2015) [paper] [project] [Data_360(code:bf28)] [Data_baidu] [View_360(code:515a)] [View_baidu]

  • Dataset-PTB: Shuran Song and Jianxiong Xiao. "Tracking Revisited using RGBD Camera: Unified Benchmark and Baselines." ICCV (2013) [paper] [project] [5 validation] [95 evaluation]

  • Dataset-ALOV300+: Arnold W. M. Smeulders, Dung M. Chu, Rita Cucchiara, Simone Calderara, Afshin Dehghan, Mubarak Shah. "Visual Tracking: An Experimental Survey." PAMI (2014) [paper] [project] [Mirror Link: ALOV300]

  • OTB2013: Wu, Yi, Jongwoo Lim, and Ming-Hsuan Yang. "Online Object Tracking: A Benchmark." CVPR (2013). [paper]

  • OTB2015: Wu, Yi, Jongwoo Lim, and Ming-Hsuan Yang. "Object Tracking Benchmark." TPAMI (2015). [paper] [project]

  • Dataset-VOT: [project]

[VOT13_paper_ICCV]The Visual Object Tracking VOT2013 challenge results

[VOT14_paper_ECCV]The Visual Object Tracking VOT2014 challenge results

[VOT15_paper_ICCV]The Visual Object Tracking VOT2015 challenge results

[VOT16_paper_ECCV]The Visual Object Tracking VOT2016 challenge results

[VOT17_paper_ECCV]The Visual Object Tracking VOT2017 challenge results

Segmentation

  1. Image Segmentation with A Bounding Box Prior dataset Ground truth database of 50 images with: Data, Segmentation, Labelling - Lasso, Labelling - Rectangle
  2. PASCAL VOC 2009 dataset Classification/Detection Competitions, Segmentation Competition, Person Layout Taster Competition datasets
  3. Motion Segmentation and OBJCUT data Cows for object segmentation, Five video sequences for motion segmentation
  4. Geometric Context Dataset Geometric Context Dataset: pixel labels for seven geometric classes for 300 images
  5. Crowd Segmentation Dataset This dataset contains videos of crowds and other high density moving objects. The videos are collected mainly from the BBC Motion Gallery and Getty Images website. The videos are shared only for the research purposes. Please consult the terms and conditions of use of these videos from the respective websites.
  6. CMU-Cornell iCoseg Dataset Contains hand-labelled pixel annotations for 38 groups of images, each group containing a common foreground. Approximately 17 images per group, 643 images total.
  7. Segmentation evaluation database 200 gray level images along with ground truth segmentations
  8. The Berkeley Segmentation Dataset and Benchmark Image segmentation and boundary detection. Grayscale and color segmentations for 300 images, the images are divided into a training set of 200 images, and a test set of 100 images.
  9. Weizmann horses 328 side-view color images of horses that were manually segmented. The images were randomly collected from the WWW.
  10. Saliency-based video segmentation with sequentially updated priors 10 videos as inputs, and segmented image sequences as ground-truth
  11. Daimler Urban Segmentation Dataset The dataset consists of video sequences recorded in urban traffic. The dataset consists of 5000 rectified stereo image pairs. 500 frames come with pixel-level semantic class annotations into 5 classes: ground, building, vehicle, pedestrian, sky. Dense disparity maps are provided as a reference.
  12. DAVIS: Densely Annotated VIdeo Segmentation A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation.

Foreground/Background

  1. Wallflower Dataset For evaluating background modelling algorithms
  2. Foreground/Background Microsoft Cambridge Dataset Foreground/Background segmentation and Stereo dataset from Microsoft Cambridge
  3. Stuttgart Artificial Background Subtraction Dataset The SABS (Stuttgart Artificial Background Subtraction) dataset is an artificial dataset for pixel-wise evaluation of background models.
  4. Image Alpha Matting Dataset Image Alpha Matting Dataset.
  5. LASIESTA: Labeled and Annotated Sequences for Integral Evaluation of SegmenTation Algorithms LASIESTA is composed of many real indoor and outdoor sequences organized in different categories, each covering a specific challenge in moving object detection strategies (a toy background-subtraction sketch follows this list).
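To make concrete what these background-modelling benchmarks evaluate, here is a toy running-mean background subtractor. It is only an illustration of the task, not a competitive method, and the `alpha`/`threshold` values are arbitrary assumptions.

```python
import numpy as np

class RunningMeanBackground:
    """Toy background model: exponential running mean + absolute-difference threshold."""

    def __init__(self, alpha=0.05, threshold=30.0):
        self.alpha = alpha          # adaptation rate of the background estimate
        self.threshold = threshold  # per-pixel grey-level difference counted as foreground
        self.background = None

    def apply(self, frame):
        """frame: 2-D grayscale array. Returns a boolean foreground mask."""
        frame = np.asarray(frame, dtype=np.float32)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold
        # update the model only where the scene currently looks like background
        self.background = np.where(
            mask, self.background,
            (1 - self.alpha) * self.background + self.alpha * frame)
        return mask
```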

Saliency Detection

  1. AIM 120 Images / 20 Observers (Neil D. B. Bruce and John K. Tsotsos 2005).
  2. LeMeur 27 Images / 40 Observers (O. Le Meur, P. Le Callet, D. Barba and D. Thoreau 2006).
  3. Kootstra 100 Images / 31 Observers (Kootstra, G., Nederveen, A. and de Boer, B. 2008).
  4. DOVES 101 Images / 29 Observers (van der Linde, I., Rajashekar, U., Bovik, A.C., Cormack, L.K. 2009).
  5. Ehinger 912 Images / 14 Observers (Krista A. Ehinger, Barbara Hidalgo-Sotelo, Antonio Torralba and Aude Oliva 2009).
  6. NUSEF 758 Images / 75 Observers (R. Subramanian, H. Katti, N. Sebe, M. Kankanhalli and T-S. Chua 2010).
  7. JianLi 235 Images / 19 Observers (Jian Li, Martin D. Levine, Xiangjing An and Hangen He 2011).
  8. Extended Complex Scene Saliency Dataset (ECSSD) ECSSD contains 1000 natural images with complex foreground or background. For each image, the ground truth mask of salient object(s) is provided (see the metric sketch after this list).
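Salient object datasets such as ECSSD, which provide binary ground-truth masks, are commonly scored with mean absolute error and an F-measure after adaptive binarization (β² = 0.3 is the usual convention). A minimal, unofficial sketch; the twice-mean-saliency threshold is one common choice, not a fixed standard.

```python
import numpy as np

def saliency_mae(saliency_map, gt_mask):
    """Mean absolute error between a [0, 1] saliency map and a binary ground-truth mask."""
    sal = np.asarray(saliency_map, dtype=np.float32)
    gt = np.asarray(gt_mask, dtype=np.float32)
    return float(np.mean(np.abs(sal - gt)))

def f_beta(saliency_map, gt_mask, beta2=0.3, eps=1e-9):
    """F-measure after binarizing at twice the mean saliency value."""
    sal = np.asarray(saliency_map, dtype=np.float32)
    gt = np.asarray(gt_mask, dtype=bool)
    pred = sal >= 2.0 * float(sal.mean())
    tp = np.logical_and(pred, gt).sum()
    precision = tp / (pred.sum() + eps)
    recall = tp / (gt.sum() + eps)
    return float((1 + beta2) * precision * recall / (beta2 * precision + recall + eps))
```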

Video Surveillance

  1. CAVIAR For the CAVIAR project a number of video clips were recorded acting out the different scenarios of interest. These include people walking alone, meeting with others, window shopping, entering and exiting shops, fighting and passing out and, last but not least, leaving a package in a public place.
  2. ViSOR ViSOR contains a large set of multimedia data and the corresponding annotations.
  3. CUHK Crowd Dataset 474 video clips from 215 crowded scenes, with ground truth on group detection and video classes.
  4. Times Square Intersection (TISI) Dataset A busy outdoor dataset for research on visual surveillance.
  5. Educational Resource Centre (ERCe) Dataset An indoor dataset collected from a university campus for physical event understanding of long video streams.
  6. PIROPO Database: People in Indoor ROoms with Perspective and Omnidirectional cameras Multiple sequences recorded in two different indoor rooms, using both omnidirectional and perspective cameras, containing people in a variety of situations (people walking, standing, and sitting). Both annotated and non-annotated sequences are provided, where ground truth is point-based. In total, more than 100,000 annotated frames are available.

Multiview

  1. 3D Photography Dataset Multiview stereo data sets: a set of images
  2. Multi-view Visual Geometry group's data set Dinosaur, Model House, Corridor, Aerial views, Valbonne Church, Raglan Castle, Kapel sequence
  3. Oxford reconstruction data set (building reconstruction) Oxford colleges
  4. Multi-View Stereo dataset (Vision Middlebury) Temple, Dino
  5. Multi-View Stereo for Community Photo Collections Venus de Milo, Duomo in Pisa, Notre Dame de Paris
  6. IS-3D Data Dataset provided by Center for Machine Perception
  7. CVLab dataset CVLab dense multi-view stereo image database
  8. 3D Objects on Turntable Objects viewed from 144 calibrated viewpoints under 3 different lighting conditions
  9. Object Recognition in Probabilistic 3D Scenes Images from 19 sites collected from a helicopter flying around Providence, RI. USA. The imagery contains approximately a full circle around each site.
  10. Multiple cameras fall dataset 24 scenarios recorded with 8 IP video cameras. The first 22 scenarios contain a fall and confounding events; the last 2 contain only confounding events.
  11. CMP Extreme View Dataset 15 wide baseline stereo image pairs with large viewpoint change, provided ground truth homographies.
  12. KTH Multiview Football Dataset II This dataset consists of 8000+ images of professional footballers during a match of the Allsvenskan league. It consists of two parts: one with ground truth pose in 2D and one with ground truth pose in both 2D and 3D.
  13. Disney Research light field datasets This dataset includes: camera calibration information, raw input images we have captured, radially undistorted, rectified, and cropped images, depth maps resulting from our reconstruction and propagation algorithm, depth maps computed at each available view by the reconstruction algorithm without the propagation applied.
  14. CMU Panoptic Studio Dataset Multiple people social interaction dataset captured by 500+ synchronized video cameras, with 3D full body skeletons and calibration data.
  15. 4D Light Field Dataset 24 synthetic scenes. Available data per scene: 9x9 input images (512x512x3), ground truth (disparity and depth), camera parameters, disparity ranges, evaluation masks (see the disparity-to-depth sketch after this list).
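Several of the stereo and light-field datasets above ship disparity maps plus camera parameters instead of metric depth. Under the standard rectified-stereo model, depth = focal_length × baseline / disparity; the sketch below applies that relation, with parameter names chosen for illustration (the actual focal length and baseline must be read from each dataset's own calibration files).

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline):
    """Convert a rectified-stereo disparity map (pixels) to depth in baseline units.

    depth = focal_length_px * baseline / disparity
    Pixels with zero or negative disparity get NaN (no valid depth).
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.nan)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline / disparity[valid]
    return depth
```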

Action

  1. UCF Sports Action Dataset This dataset consists of a set of actions collected from various sports which are typically featured on broadcast television channels such as the BBC and ESPN. The video sequences were obtained from a wide range of stock footage websites including BBC Motion gallery, and GettyImages.
  2. UCF Aerial Action Dataset This dataset features video sequences that were obtained using a R/C-controlled blimp equipped with an HD camera mounted on a gimbal. The collection represents a diverse pool of actions featured at different heights and aerial viewpoints. Multiple instances of each action were recorded at different flying altitudes which ranged from 400-450 feet and were performed by different actors.
  3. UCF YouTube Action Dataset It contains 11 action categories collected from YouTube.
  4. Weizmann action recognition Walk, Run, Jump, Gallop sideways, Bend, One-hand wave, Two-hands wave, Jump in place, Jumping Jack, Skip.
  5. UCF50 UCF50 is an action recognition dataset with 50 action categories, consisting of realistic videos taken from YouTube.
  6. ASLAN The Action Similarity Labeling (ASLAN) Challenge.
  7. MSR Action Recognition Datasets The dataset was captured by a Kinect device. There are 12 dynamic American Sign Language (ASL) gestures, and 10 people. Each person performs each gesture 2-3 times.
  8. KTH Recognition of human actions Contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several times by 25 subjects in four different scenarios: outdoors, outdoors with scale variation, outdoors with different clothes and indoors.
  9. Hollywood-2 Human Actions and Scenes dataset The Hollywood-2 dataset contains 12 classes of human actions and 10 classes of scenes distributed over 3669 video clips and approximately 20.1 hours of video in total.
  10. Collective Activity Dataset This dataset contains 5 different collective activities (crossing, walking, waiting, talking, and queueing) and 44 short video sequences, some of which were recorded with a consumer hand-held digital camera from varying viewpoints.
  11. Olympic Sports Dataset The Olympic Sports Dataset contains YouTube videos of athletes practicing different sports.
  12. SDHA 2010 Surveillance-type videos
  13. VIRAT Video Dataset The dataset is designed to be realistic, natural, and more challenging for video surveillance domains than existing action recognition datasets in terms of its resolution, background clutter, diversity in scenes, and human activity/event categories.
  14. HMDB: A Large Video Database for Human Motion Recognition Collected from various sources, mostly from movies, and a small proportion from public databases, YouTube and Google videos. The dataset contains 6849 clips divided into 51 action categories, each containing a minimum of 101 clips.
  15. Stanford 40 Actions Dataset Dataset of 9,532 images of humans performing 40 different actions, annotated with bounding-boxes.
  16. 50Salads dataset Fully annotated dataset of RGB-D video data and data from accelerometers attached to kitchen objects capturing 25 people preparing two mixed salads each (4.5h of annotated data). Annotated activities correspond to steps in the recipe and include phase (pre-/ core-/ post) and the ingredient acted upon.
  17. Penn Sports Action The dataset contains 2326 video sequences of 15 different sport actions and human body joint annotations for all sequences.
  18. CVRR-HANDS 3D A Kinect dataset for hand detection in naturalistic driving settings, as well as a challenging dataset of 19 dynamic hand gestures for human-machine interface research.
  19. TUM Kitchen Data Set Observations of several subjects setting a table in different ways. Contains videos, motion capture data, RFID tag readings,...
  20. TUM Breakfast Actions Dataset This dataset comprises 10 actions related to breakfast preparation, performed by 52 different individuals in 18 different kitchens.
  21. MPII Cooking Activities Dataset A dataset of cooking activities.
  22. GTEA Gaze+ Dataset This dataset consists of seven meal-preparation activities, each performed by 10 subjects. Subjects perform the activities based on the given cooking recipes.
  23. UTD-MHAD: multimodal human action recognition dataset The dataset consists of four temporally synchronized data modalities. These modalities include RGB videos, depth videos, skeleton positions, and inertial signals (3-axis acceleration and 3-axis angular velocity) from a Kinect RGB-D camera and a wearable inertial sensor, for a comprehensive set of 27 human actions (a mean-class-accuracy sketch follows this list).
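Clip-level action datasets such as UCF50 and HMDB51 are usually reported as average per-class accuracy over fixed train/test splits. The helper below is a minimal, dataset-agnostic sketch of that metric.

```python
from collections import defaultdict

def mean_class_accuracy(y_true, y_pred):
    """Average of per-class accuracies; classes absent from y_true are ignored."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# e.g. mean_class_accuracy(["run", "walk", "walk"], ["run", "walk", "run"]) -> 0.75
```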

Human pose/Expression

  1. AFEW (Acted Facial Expressions In The Wild)/SFEW (Static Facial Expressions In The Wild) A dynamic temporal facial expression data corpus extracted from movies, depicting close-to-real-world environments.
  2. Expression in-the-Wild (ExpW) Dataset Contains 91,793 faces manually labeled with expressions. Each of the face images was manually annotated as one of the seven basic expression categories: “angry”, “disgust”, “fear”, “happy”, “sad”, “surprise”, or “neutral”.
  3. ETHZ CALVIN Dataset CALVIN research group datasets
  4. HandNet (annotated depth images of articulating hands) This dataset includes 214971 annotated depth images of hand poses captured by a RealSense RGB-D sensor. Annotations: per-pixel classes, 6D fingertip pose, heatmap. Images -> Train: 202198, Test: 10000, Validation: 2773. Recorded at GIP Lab, Technion.
  5. 3D Human Pose Estimation Depth videos + ground truth human poses from 2 viewpoints to improve 3D human pose estimation (see the PCK sketch after this list).
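Pose datasets like these are often evaluated with PCK (percentage of correct keypoints): a predicted joint is counted as correct if it falls within a fraction α of some reference length (torso size, bounding-box diagonal, etc.) of the ground truth. A minimal sketch, with the reference length and α left to the caller:

```python
import numpy as np

def pck(pred_joints, gt_joints, ref_length, alpha=0.2):
    """PCK@alpha: fraction of joints within alpha * ref_length of the ground truth.

    pred_joints, gt_joints: (N, 2) or (N, 3) arrays of joint coordinates.
    ref_length: dataset-specific normalization length (e.g. torso diameter).
    """
    dists = np.linalg.norm(np.asarray(pred_joints) - np.asarray(gt_joints), axis=1)
    return float(np.mean(dists <= alpha * ref_length))
```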

Medical

  1. VIP Laparoscopic / Endoscopic Dataset Collection of endoscopic and laparoscopic (mono/stereo) videos and images
  2. Mouse Embryo Tracking Database Contains 100 examples, each with the uncompressed frames up to the 10th frame after the appearance of the 8th cell; a text file with the trajectories of all the cells, from appearance to division; and a movie file showing the trajectories of the cells.
  3. FIRE Fundus Image Registration Dataset 134 retinal image pairs and ground truth for registration (see the registration-error sketch after this list).
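Registration benchmarks such as FIRE supply corresponding control points in each image pair, so a registration can be scored by how far the estimated transform maps those points from their ground-truth positions. A minimal sketch, assuming the estimate is a 3×3 homography acting on homogeneous coordinates; this is not the dataset's official protocol.

```python
import numpy as np

def registration_error(H, points_src, points_dst):
    """Mean Euclidean distance between H-transformed source points and target points.

    H: 3x3 homography estimated by the registration method.
    points_src, points_dst: (N, 2) arrays of corresponding control points.
    """
    src = np.asarray(points_src, dtype=np.float64)
    dst = np.asarray(points_dst, dtype=np.float64)
    pts = np.hstack([src, np.ones((len(src), 1))])   # to homogeneous coordinates
    mapped = (H @ pts.T).T
    mapped = mapped[:, :2] / mapped[:, 2:3]           # back to Cartesian coordinates
    return float(np.mean(np.linalg.norm(mapped - dst, axis=1)))
```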

Misc

  1. Zurich Buildings Database The ZuBuD Image Database contains 1005 images of Zurich city buildings.
  2. Color Name Data Sets
  3. Mall dataset The mall dataset was collected from a publicly accessible webcam for crowd counting and activity profiling research (a counting-error sketch follows this list).
  4. QMUL Junction Dataset A busy traffic dataset for research on activity analysis and behaviour understanding.
  5. Miracl-VC1 Miracl-VC1 is a lip-reading dataset including both depth and color images. Fifteen speakers, positioned in the frustum of an MS Kinect sensor, each utter a set of ten words and ten phrases ten times.
  6. NYU Symmetry Database The mirror symmetry database contains 176 single-symmetry and 63 multiple-symmetry images (.png files) with accompanying ground-truth annotations (.mat files).
  7. RGB-W: When Vision Meets Wireless Image data paired with the wireless signal emitted by individuals' cell phones, referred to as RGB-W.
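Crowd-counting datasets such as the Mall dataset are typically scored with mean absolute error and root-mean-square error between predicted and ground-truth per-frame counts. A minimal sketch:

```python
import numpy as np

def counting_errors(pred_counts, gt_counts):
    """MAE and RMSE between predicted and ground-truth per-frame crowd counts."""
    pred = np.asarray(pred_counts, dtype=np.float64)
    gt = np.asarray(gt_counts, dtype=np.float64)
    mae = float(np.mean(np.abs(pred - gt)))
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    return mae, rmse
```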

Challenge

  1. Microsoft COCO Image Captioning Challenge  https://competitions.codalab.org/competitions/3221
  2. ImageNet Large Scale Visual Recognition Challenge http://www.image-net.org/
  3. COCO 2017 Detection Challenge http://cocodataset.org/#detections-challenge2017 (a results-format sketch follows this list)
  4. Visual Domain Adaptation (VisDA2017) Segmentation Challenge https://competitions.codalab.org/competitions/17054
  5. The PASCAL Visual Object Classes Homepage http://host.robots.ox.ac.uk/pascal/VOC/
  6. YouTube-8M Large-Scale Video Understanding https://research.google.com/youtube8m/workshop.html
  7. Joint COCO and Places Challenge https://places-coco2017.github.io/
  8. Places Challenge 2017: Deep Scene Understanding is held jointly with COCO Challenge at ICCV'17 http://placeschallenge.csail.mit.edu/
  9. COCO Challenges. http://cocodataset.org/#home
  10. CBCL StreetScenes Challenge Framework http://cbcl.mit.edu/software-datasets/streetscenes/
  11. CoronARe: A Coronary Artery Reconstruction Challenge https://challenge.kitware.com/#phase/58d2925ecad3a532cfa20e37
  12. NUS-PRO: A New Visual Tracking Challenge http://www.visionbib.com/bibliography/journal/pam.html#PAMI(38)
  13. i-LIDS: Bag and vehicle detection challenge http://www.elec.qmul.ac.uk/staffinfo/andrea/avss2007_d.html
  14. The Action Similarity Labeling Challenge http://www.visionbib.com/bibliography/journal/pam.html#PAMI(34)
  15. The PASCAL Visual Object Classes Challenge 2012 http://www.visionbib.com/bibliography/journal-listeu.html#TT1663
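The COCO-style detection challenges listed above expect a submission as a single JSON array of detections, each with an image_id, category_id, an [x, y, width, height] bounding box, and a confidence score (see the results-format page on cocodataset.org). A minimal sketch of writing such a file; the detections below are placeholder values, not real outputs.

```python
import json

# placeholder detections: (image_id, category_id, (x, y, w, h), score)
detections = [
    (397133, 1, (381.6, 67.4, 121.0, 304.3), 0.93),
    (397133, 18, (12.1, 200.5, 88.7, 60.2), 0.71),
]

results = [
    {"image_id": img_id,
     "category_id": cat_id,
     "bbox": [round(v, 2) for v in bbox],   # [x, y, width, height] in pixels
     "score": round(score, 4)}
    for img_id, cat_id, bbox, score in detections
]

with open("detections_results.json", "w") as f:
    json.dump(results, f)
```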

Startups

  1. Megvii (旷视科技): making machines see and understand the world [https://megvii.com/]
  2. CloudWalk (云从科技): face recognition technology originating from the "father of computer vision" [http://www.cloudwalk.cn/]
  3. DeepGlint (格灵深瞳): letting computers understand the world [http://www.deepglint.com/]
  4. Beijing Moshanghua Technology (北京陌上花科技): an AI computer vision engine [http://www.dressplus.cn/]
  5. YITU Technology (依图科技): building the future of computer vision with you [http://www.yitutech.com/]
  6. Malong Technologies (码隆科技): the most fashionable artificial intelligence [https://www.malong.com/]
  7. LinkFace (脸云科技): world-leading face recognition technology and services [https://www.linkface.cn/]
  8. QfeelTech (速感科技): enabling robots to perceive the world and using robots to change it [http://www.qfeeltech.com/]
  9. TuSimple (图森): a front-runner in commercializing autonomous driving in China [http://www.tusimple.com/]
  10. SenseTime (商汤科技): teaching computers to understand the world [https://www.sensetime.com/]
  11. Tuputech (图普科技): focused on image recognition [https://us.tuputech.com/?from=gz]
  12. HiScene (亮风台): focused on augmented reality, leading human-computer interaction [https://www.hiscene.com/]
  13. SeetaTech (中科视拓): recognizing people, faces, and all things; open-sourcing to empower shared development [http://www.seetatech.com/]

WeChat Official Accounts

  1. 视觉求索 thevisionseeker
  2. 深度学习大讲堂 deeplearningclass
  3. VALSE valse_wechat

Featured Content

  1. "A Comprehensive Survey of Mamba in Vision: Techniques and Applications" (《视觉中的Mamba:技术与应用》)
  2. "A Survey on Evaluation of Multimodal Large Language Models" (《多模态大语言模型评估综述》)
  3. "A Survey of Image Segmentation in the Era of Foundation Models" (基础模型时代的图像分割研究综述)
  4. "A Survey of Data Modality Conversion Based on Transformer Models" (基于Transformer模型的数据模态转换综述)
  5. "A Survey of Mamba" (Mamba综述)
  6. "A Survey of Computer-Vision-Based Industrial Metal Surface Defect Detection" (基于计算机视觉的工业金属表面缺陷检测综述)
  7. "Fairness and Bias Mitigation in Computer Vision: A Survey" (计算机视觉中的公平性与偏差缓解:综述)
  8. "Recent Advances of Foundation Language Models in Continual Learning: A Survey" (基础语言模型在持续学习中的最新进展:综述)