"Deep Learning"/"Deep Neural Nets" is a technological marvel that is now increasingly deployed at the cutting-edge of artificial intelligence tasks. This dramatic success of deep learning in the last few years has been hinged on an enormous amount of heuristics and it has turned out to be a serious mathematical challenge to be able to rigorously explain them. In this thesis, submitted to the Department of Applied Mathematics and Statistics, Johns Hopkins University we take several steps towards building strong theoretical foundations for these new paradigms of deep-learning. In chapter 2 we show new circuit complexity theorems for deep neural functions and prove classification theorems about these function spaces which in turn lead to exact algorithms for empirical risk minimization for depth 2 ReLU nets. We also motivate a measure of complexity of neural functions to constructively establish the existence of high-complexity neural functions. In chapter 3 we give the first algorithm which can train a ReLU gate in the realizable setting in linear time in an almost distribution free set up. In chapter 4 we give rigorous proofs towards explaining the phenomenon of autoencoders being able to do sparse-coding. In chapter 5 we give the first-of-its-kind proofs of convergence for stochastic and deterministic versions of the widely used adaptive gradient deep-learning algorithms, RMSProp and ADAM. This chapter also includes a detailed empirical study on autoencoders of the hyper-parameter values at which modern algorithms have a significant advantage over classical acceleration based methods. In the last chapter 6 we give new and improved PAC-Bayesian bounds for the risk of stochastic neural nets. This chapter also includes an experimental investigation revealing new geometric properties of the paths in weight space that are traced out by the net during the training.
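Similarly, the sketch below gives the standard RMSProp and ADAM update rules whose convergence Chapter 5 analyzes. The hyperparameter names and default values (lr, beta1, beta2, eps) follow common usage in the literature and are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def rmsprop_step(w, grad, v, lr=1e-3, beta2=0.9, eps=1e-8):
    """One RMSProp update: scale the step by a running
    average of squared gradients."""
    v = beta2 * v + (1.0 - beta2) * grad**2
    w = w - lr * grad / (np.sqrt(v) + eps)
    return w, v

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM update: bias-corrected estimates of the first
    and second moments of the gradient (step count t >= 1)."""
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad**2
    m_hat = m / (1.0 - beta1**t)   # bias correction
    v_hat = v / (1.0 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage: minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0])
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 1001):
    w, m, v = adam_step(w, w.copy(), m, v, t)
# w is now close to the minimizer at the origin.
```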