Differentiable Graphics with TensorFlow 2.0

Deep learning has introduced a profound paradigm shift in recent years, making it possible to solve significantly more complex perception problems than previously. This shift has benefited a tremendous number of fields, with giant leaps forward in computer vision and computer graphics algorithms. The development of public libraries such as TensorFlow is in large part responsible for the massive growth of AI: these libraries have made deep learning easily accessible to researchers and engineers, enabling fast advances in deep learning techniques in both industry and academia.
We will start this course with an introduction to deep learning and present the newly released TensorFlow 2.0, with a focus on best practices and exciting new functionality. We will then show tips, tools, and algorithms for visualizing and interpreting complex neural networks with TensorFlow. Finally, we will introduce a novel TensorFlow library containing a set of graphics-inspired differentiable layers for building structured neural networks that solve various two- and three-dimensional perception tasks. To make the course interactive, we will punctuate the presentations with real-time demos in the form of Colab notebooks. Basic prior familiarity with deep learning is assumed.
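The abstract above centers on graphics-inspired differentiable layers. As a minimal illustration of what "differentiable" means in this context (a plain-NumPy sketch, not the library's actual API), consider a 2D rotation layer whose angle parameter can receive gradients:

```python
import numpy as np

def rotate(points, theta):
    """Rotate an Nx2 array of points by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T

def rotate_grad_theta(points, theta):
    """Analytic derivative of the rotated points w.r.t. theta."""
    c, s = np.cos(theta), np.sin(theta)
    dR = np.array([[-s, -c], [c, -s]])  # elementwise d/dtheta of R
    return points @ dR.T

# Finite-difference check that the analytic gradient is correct,
# i.e. that the "layer" really is differentiable in its parameter.
pts = np.array([[1.0, 0.0], [0.0, 2.0]])
theta, eps = 0.3, 1e-6
numeric = (rotate(pts, theta + eps) - rotate(pts, theta - eps)) / (2 * eps)
analytic = rotate_grad_theta(pts, theta)
print(np.allclose(numeric, analytic, atol=1e-5))  # True
```

In a real framework such as TensorFlow 2.0, this gradient would be produced by automatic differentiation rather than derived by hand, which is what lets such geometric layers sit inside a trainable network.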


Related content

SIGGRAPH (Special Interest Group on Computer GRAPHICS) was founded in 1967 and has long been dedicated to advancing software and hardware technology for computer graphics and animation. Since 1974, SIGGRAPH has held an annual conference, and since 1981 the conference has also included a CG (computer graphics) exhibition. Most computer graphics software and hardware vendors present their latest research results at the annual SIGGRAPH conference, and many of the year's most outstanding computer-animation works from games and film are showcased there. SIGGRAPH is therefore highly influential in graphics and imaging technology, in computer software and hardware, and in CG at large.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
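As a sketch of the first category above, post-training uniform quantization of a weight tensor can be shown in a few lines of NumPy (an illustration of the general idea only, not code from any surveyed method):

```python
import numpy as np

def quantize_uint8(w):
    """Uniform affine quantization of a float tensor to 8-bit integers."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255.0          # one quantization step
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover an approximation of the original float values."""
    return q.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale, lo = quantize_uint8(w)
w_hat = dequantize(q, scale, lo)

print(q.nbytes, w.nbytes)                # 4x smaller storage (uint8 vs float32)
print(np.abs(w - w_hat).max() <= scale)  # True: error bounded by one step
```

This is the memory-reduction trade-off the survey analyzes: a 4x smaller model at the cost of a bounded reconstruction error, with more sophisticated schemes (per-channel scales, quantization-aware training) reducing the accuracy loss further.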


This is an open-sourced book on deep learning. The book is intentionally light on mathematics and caters to readers who have no experience with deep learning or a strong mathematics background. It is meant to help readers take their "First Step" towards deep learning.


Deep Learning in Computer Vision: Methods, Interpretation, Causation, and Fairness

Deep learning models have succeeded at a variety of human intelligence tasks and are already being used at commercial scale. These models largely rely on standard gradient-descent optimization of a function f_θ parameterized by θ, which maps an input x to an output f_θ(x). The optimization procedure minimizes the loss (difference) between the model output f_θ(x) and the actual output y. As an example, in the cancer-detection setting, x is an MRI image, and y is the presence or absence of cancer. Three key ingredients hint at the reason behind deep learning's power: (1) deep architectures that are adept at breaking down complex functions into a composition of simpler abstract parts; (2) standard gradient-descent methods that can attain local minima on a nonconvex loss function that are close enough to the global minima; and (3) learning algorithms that can be executed on parallel computing hardware (e.g., graphics processing units), making the optimization viable over hundreds of millions of observations. Computer vision tasks, where the input is a high-dimensional image or video, are particularly well suited to deep learning. Recent advances in deep architectures (i.e., inception modules, attention networks, adversarial networks, and deep RL) have opened up completely new applications that were previously unexplored. However, the breakneck progress to replace human tasks with deep learning comes with caveats. These deep models tend to evade interpretation, lack causal relationships between input x and output y, and may inadvertently mimic not just human actions but also human biases and stereotypes. In this tutorial, we provide an intuitive explanation of deep learning methods in computer vision as well as their limitations in practice.
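The optimization loop described above can be sketched in a few lines, here for a toy one-dimensional least-squares model f_θ(x) = θ·x fit by gradient descent (a generic illustration with made-up data, not the tutorial's code):

```python
import numpy as np

# Toy data: y = 3x plus a little noise.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.01 * rng.normal(size=100)

theta = 0.0   # model parameter, initialized arbitrarily
lr = 0.1      # learning rate
for _ in range(200):
    residual = theta * x - y            # f_theta(x) - y
    grad = 2.0 * np.mean(residual * x)  # d/dtheta of the mean squared loss
    theta -= lr * grad                  # gradient-descent step

print(round(theta, 2))  # converges to roughly 3.0
```

Deep learning replaces the scalar θ with millions of parameters and the linear map with a composition of nonlinear layers, but the loop (forward pass, loss, gradient, update) is the same.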


Overview: The final release of TensorFlow 2.0 shipped on October 1. Are you up to date on its new features? We have collected the recent tutorial slides "Differentiable Graphics with TensorFlow 2.0", written by several researchers at Google, which explain in detail how to develop deep learning applications with TensorFlow 2.0, with a particular focus on computer graphics.


Convolutional Neural Networks (CNNs) have gained significant traction in the field of machine learning, particularly due to their high accuracy in visual recognition. Recent works have pushed the performance of GPU implementations of CNNs to significantly improve their classification and training times. With these improvements, many frameworks have become available for implementing CNNs on both CPUs and GPUs, with no support for FPGA implementations. In this work we present a modified version of the popular CNN framework Caffe, with FPGA support. This allows for classification using CNN models and specialized FPGA implementations with the flexibility of reprogramming the device when necessary, seamless memory transactions between host and device, simple-to-use test benches, and the ability to create pipelined layer implementations. To validate the framework, we use the Xilinx SDAccel environment to implement an FPGA-based Winograd convolution engine and show that the FPGA layer can be used alongside other layers running on a host processor to run several popular CNNs (AlexNet, GoogleNet, VGG A, Overfeat). The results show that our framework achieves 50 GFLOPS across 3x3 convolutions in the benchmarks. This is achieved within a practical framework, which will aid in future development of FPGA-based CNNs.
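The Winograd convolution mentioned above trades multiplications for additions. For the smallest 1-D case, F(2,3) (two outputs of a three-tap filter), the standard transforms give the output as Y = Aᵀ[(G·g) ⊙ (Bᵀ·d)]; a NumPy sketch of the arithmetic (illustrative only, not the paper's FPGA engine):

```python
import numpy as np

# Winograd F(2,3) transform matrices (Lavin & Gray formulation).
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of a 3-tap sliding dot product using 4 multiplications."""
    return A_T @ ((G @ g) * (B_T @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])            # 4 input samples
g = np.array([1.0, 1.0, 1.0])                 # 3 filter taps
direct = np.array([d[0:3] @ g, d[1:4] @ g])   # naive version: 6 multiplications
print(winograd_f23(d, g), direct)             # both print [6. 9.]
```

The elementwise product is the only stage with general multiplications (4 instead of 6 here); the 2-D F(2x2, 3x3) version used for 3x3 convolution layers nests this construction and is what makes the transform attractive on multiplier-limited hardware such as FPGAs.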
