### Related Content

[Overview] 3D deep learning is essential for processing real-world scene data, and 3D understanding is critical for many applications, such as autonomous vehicles, autonomous robots, virtual reality, and augmented reality. Hao Su of UC San Diego has long worked on 3D deep learning; his 3D Deep Learning tutorial, a 156-page slide deck, is a valuable resource for learning the field.

http://cseweb.ucsd.edu/~haosu/talks.html#_3d_deep_learning

3D understanding is critical for many applications, such as autonomous vehicles, autonomous robots, virtual reality, and augmented reality. Unlike 2D images, which are dominated by regular pixel arrays, 3D data can be represented as irregular point clouds, e.g. from LiDAR sensors. This poses challenges for deep architecture design.
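The irregularity mentioned above is why point-cloud networks such as PointNet apply a shared per-point function followed by a symmetric (order-independent) pooling. The sketch below is an illustrative toy version of that idea (a random linear map standing in for a learned MLP), not the actual PointNet implementation:

```python
import numpy as np

# A point cloud is an unordered set of N points in R^3, unlike an image's
# regular pixel grid. A shared per-point map plus a symmetric pooling
# (here, max over points) yields a feature that ignores point order.

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 16))  # toy shared per-point linear map

def global_feature(points):
    """Map each point through a shared function, then max-pool over points."""
    per_point = np.maximum(points @ W, 0.0)   # ReLU(point @ W), shape (N, 16)
    return per_point.max(axis=0)              # symmetric pooling -> (16,)

cloud = rng.standard_normal((128, 3))
shuffled = cloud[rng.permutation(128)]

# Reordering the points leaves the global feature unchanged.
assert np.allclose(global_feature(cloud), global_feature(shuffled))
```

Because max-pooling is commutative, any permutation of the input rows produces the same global feature, which is the property an architecture must have to consume raw point sets.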

http://cseweb.ucsd.edu/~haosu/

Part I: 3D Data, by Hao Su

Part II: Classification, by Hao Su

Part III: Segmentation & Detection, by Jiayuan Gu

Part IV: 3D Data Synthesis, by Minghua Liu

[Overview] With the rapid development of AI in recent years, applying computer vision to autonomous driving has made practical self-driving feasible and greatly advanced the field. This article introduces a comprehensive survey of computer vision for autonomous driving, covering the problems, datasets, and state-of-the-art techniques in the area.

• Introduction
• History of Autonomous Driving
• Sensors
• Datasets and Benchmarks
• Object Detection
• Object Tracking
• Semantic Segmentation
• Instance Segmentation
• Stereo
• Multi-view 3D Reconstruction
• Optical Flow
• 3D Scene Flow
• Mapping, Localization, and Ego-Motion Estimation
• Scene Understanding
• End-to-End Learning for Autonomous Driving
• Conclusion

In this work we propose a new method for simultaneous object detection and 6DoF pose estimation. Unlike most recent techniques for CNN-based object detection and pose estimation, we do not base our approach on the common 2D counterparts, i.e. SSD and YOLO, but propose a new scheme. Instead of regressing 2D or 3D bounding boxes, we output full-sized 2D images containing multiclass object masks and dense 2D-3D correspondences. Having them at hand, a 6D pose is computed for each detected object using the PnP algorithm supplemented with RANSAC. This strategy allows for substantially better pose estimates due to a much higher number of relevant pose correspondences. Furthermore, the method is real-time capable, conceptually simple and not bound to any particular detection paradigms, such as R-CNN, SSD or YOLO. We test our method for single- and multiple-object pose estimation and compare the performance with the former state-of-the-art approaches. Moreover, we demonstrate how to use our pipeline when only synthetic renderings are available. In both cases, we outperform the former state-of-the-art by a large margin.

3D vehicle detection and tracking from a monocular camera requires detecting and associating vehicles, and estimating their locations and extents together. It is challenging because vehicles are in constant motion and it is practically impossible to recover the 3D positions from a single image. In this paper, we propose a novel framework that jointly detects and tracks 3D vehicle bounding boxes. Our approach leverages 3D pose estimation to learn 2D patch association over time and uses temporal information from tracking to obtain stable 3D estimation. Our method also leverages 3D box depth ordering and motion to link together the tracks of occluded objects. We train our system on realistic 3D virtual environments, collecting a new diverse, large-scale and densely annotated dataset with accurate 3D trajectory annotations. Our experiments demonstrate that our method benefits from inferring 3D for both data association and tracking robustness, leveraging our dynamic 3D tracking dataset.

We propose a scalable, efficient and accurate approach to retrieve 3D models for objects in the wild. Our contribution is twofold. We first present a 3D pose estimation approach for object categories which significantly outperforms the state-of-the-art on Pascal3D+. Second, we use the estimated pose as a prior to retrieve 3D models which accurately represent the geometry of objects in RGB images. For this purpose, we render depth images from 3D models under our predicted pose and match learned image descriptors of RGB images against those of rendered depth images using a CNN-based multi-view metric learning approach. In this way, we are the first to report quantitative results for 3D model retrieval on Pascal3D+, where our method chooses the same models as human annotators for 50% of the validation images on average. In addition, we show that our method, which was trained purely on Pascal3D+, retrieves rich and accurate 3D models from ShapeNet given RGB images of objects in the wild.
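The matching stage described above compares a learned descriptor of the RGB image against descriptors of depth images rendered from candidate 3D models. The sketch below shows only that nearest-neighbor retrieval step with made-up descriptors; in the paper these embeddings come from a CNN trained with multi-view metric learning, which is not reproduced here:

```python
import numpy as np

def normalize(v):
    """L2-normalize descriptors along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def retrieve(query_desc, model_descs):
    """Return the index of the 3D model whose rendered-depth descriptor
    has the highest cosine similarity to the RGB query descriptor."""
    sims = normalize(model_descs) @ normalize(query_desc)
    return int(np.argmax(sims))

rng = np.random.default_rng(2)
model_descs = rng.standard_normal((50, 128))               # one per candidate model
query = model_descs[17] + 0.05 * rng.standard_normal(128)  # noisy view of model 17
```

Calling `retrieve(query, model_descs)` picks out model 17: metric learning is what makes this trivial matching work, by pulling RGB and rendered-depth descriptors of the same object close together in the embedding space.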

Liuhao Ge, Zhou Ren, Yuncheng Li, Zehao Xue, Yingying Wang, Jianfei Cai, Junsong Yuan
15+ reads · Mar 3, 2019
Sergey Zakharov, Ivan Shugurov, Slobodan Ilic
5+ reads · Feb 28, 2019
Xuesong Li, Jose E Guivant, Ngaiming Kwok, Yongzhi Xu
7+ reads · Jan 24, 2019
Hou-Ning Hu, Qi-Zhi Cai, Dequan Wang, Ji Lin, Min Sun, Philipp Krähenbühl, Trevor Darrell, Fisher Yu
8+ reads · Dec 2, 2018
Alexander Grabner, Peter M. Roth, Vincent Lepetit
7+ reads · Mar 30, 2018
Martin Simon, Stefan Milz, Karl Amende, Horst-Michael Gross
3+ reads · Mar 16, 2018
Mustansar Fiaz, Arif Mahmood, Soon Ki Jung
9+ reads · Feb 14, 2018
Ju Yong Chang, Kyoung Mu Lee
3+ reads · Dec 28, 2017
Rohit Girdhar, Georgia Gkioxari, Lorenzo Torresani, Manohar Paluri, Du Tran
7+ reads · Dec 26, 2017