Deep Learning (DL) methods have emerged as one of the most powerful tools for function approximation and prediction. While the representation properties of DL have been well studied, uncertainty quantification remains challenging and largely unexplored. Data augmentation techniques are a natural approach for providing uncertainty quantification and for incorporating stochastic Monte Carlo search into stochastic gradient descent (SGD) methods. The purpose of our paper is to show that training DL architectures with data augmentation leads to efficiency gains. We use the theory of scale mixtures of normals to derive data augmentation strategies for deep learning, which allows variants of the expectation-maximization (EM) and MCMC algorithms to be brought to bear on these high-dimensional nonlinear models. To demonstrate our methodology, we develop data augmentation algorithms for a variety of commonly used activation functions: logit, ReLU, leaky ReLU, and SVM. We compare our methodology to traditional stochastic gradient descent with back-propagation. Our optimization procedure yields a version of iteratively re-weighted least squares and can be implemented at scale with accelerated linear algebra methods, providing substantial improvements in speed. We illustrate our methodology on a number of standard datasets. Finally, we conclude with directions for future research.
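The core device named in the abstract, representing a density as a scale mixture of normals, can be illustrated with a classical example that is not tied to any particular derivation in the paper: the standard Laplace distribution is a normal with an exponentially distributed variance (Andrews and Mallows, 1974). A minimal numerical sketch of this identity:

```python
import numpy as np

# Andrews & Mallows (1974): if lam ~ Exp(rate = 1/2) and X | lam ~ N(0, lam),
# then marginally X ~ Laplace(0, 1).  This is purely illustrative of the
# scale-mixture-of-normals idea; sample sizes and seed are arbitrary choices.
rng = np.random.default_rng(0)
n = 200_000

lam = rng.exponential(scale=2.0, size=n)   # mixing variances, mean 2 (rate 1/2)
x = rng.normal(0.0, np.sqrt(lam))          # scale-mixture draws

direct = rng.laplace(0.0, 1.0, size=n)     # direct Laplace(0, 1) draws

# Laplace(0, 1) has mean 0 and variance 2; both samples should agree closely.
print(x.mean(), x.var())
print(direct.mean(), direct.var())
```

Augmentation schemes of this kind introduce the mixing variable (here `lam`) as latent data, turning a non-Gaussian objective into a conditionally Gaussian one that EM or Gibbs-style updates can exploit.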