Reposted from: 爱可可-爱生活 (Weibo)
The goal of this course is to review currently available theories for deep learning and encourage better theoretical understanding of deep learning algorithms.
Readings (Lecture 1)
Deep Deep Trouble
Why 2016 is The Global Tipping Point...
Are AI and ML Killing Analytics...
The Dark Secret at the Heart of AI
AI Robots Learning Racism...
FaceApp Forced to Pull ‘Racist’ Filters...
Losing a Whole Generation of Young Men to Video Games
Readings (Lecture 2)
Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images
ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
Very Deep Convolutional Networks for Large-Scale Image Recognition (VGG)
Going Deeper with Convolutions (GoogLeNet)
Deep Residual Learning for Image Recognition (ResNet)
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Visualizing and Understanding Convolutional Networks
Blogs
An Intuitive Guide to Deep Network Architectures
Neural Network Architectures
Videos
Deep Visualization Toolbox
Readings (Lecture 3)
A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction
Energy Propagation in Deep Convolutional Neural Networks
Discrete Deep Feature Extraction: A Theory and New Architectures
Topology Reduction in Deep Convolutional Feature Extraction Networks
Readings (Lecture 4)
A Probabilistic Framework for Deep Learning
Semi-Supervised Learning with the Deep Rendering Mixture Model
A Probabilistic Theory of Deep Learning
Readings (Lecture 5)
Why and When Can Deep - but Not Shallow - Networks Avoid the Curse of Dimensionality: A Review
Learning Functions: When Is Deep Better Than Shallow
Readings (Lecture 6)
Convolutional Patch Representations for Image Retrieval: an Unsupervised Approach
Convolutional Kernel Networks
Kernel Descriptors for Visual Recognition
End-to-End Kernel Learning with Supervised Convolutional Kernel Networks
Learning with Kernels
Kernel-Based Methods for Hypothesis Testing
Readings (Lecture 7)
Geometry of Neural Network Loss Surfaces via Random Matrix Theory
Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
Nonlinear random matrix theory for deep learning
Readings (Lecture 8)
Deep Learning without Poor Local Minima
Topology and Geometry of Half-Rectified Network Optimization
Convexified Convolutional Neural Networks
Implicit Regularization in Matrix Factorization
Readings (Lecture 9)
Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
Perception as an inference problem
A Neurobiological Model of Visual Attention and Invariant Pattern Recognition Based on Dynamic Routing of Information
Readings (Lecture 10)
Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding
Convolutional Neural Networks Analyzed via Convolutional Sparse Coding
Multi-Layer Convolutional Sparse Modeling: Pursuit and Dictionary Learning
Convolutional Dictionary Learning via Local Processing
To be discussed and extra
Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images by Olshausen and Field
Auto-Encoding Variational Bayes by Kingma and Welling
Generative Adversarial Networks by Goodfellow et al.
Understanding Deep Learning Requires Rethinking Generalization by Zhang et al.
Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy? by Giryes et al.
Robust Large Margin Deep Neural Networks by Sokolic et al.
Tradeoffs between Convergence Speed and Reconstruction Accuracy in Inverse Problems by Giryes et al.
Understanding Trainable Sparse Coding via Matrix Factorization by Moreau and Bruna
Why are Deep Nets Reversible: A Simple Theory, With Implications for Training by Arora et al.
Stable Recovery of the Factors From a Deep Matrix Product and Application to Convolutional Network by Malgouyres and Landsberg
Optimal Approximation with Sparse Deep Neural Networks by Bölcskei et al.
Convolutional Rectifier Networks as Generalized Tensor Decompositions by Cohen and Shashua
Course homepage:
https://stats385.github.io/
Video link:
https://www.researchgate.net/project/Theories-of-Deep-Learning
Original post:
https://m.weibo.cn/1402400261/4171693540736036