The architecture of a deep neural network is defined explicitly in terms of the number of layers, the width of each layer and the general network topology. Existing optimisation frameworks neglect this information in favour of implicit architectural information (e.g. second-order methods) or architecture-agnostic distance functions (e.g. mirror descent). Meanwhile, the most popular optimiser in practice, Adam, is based on heuristics. This paper builds a new framework for deriving optimisation algorithms that explicitly leverage neural architecture. The theory extends mirror descent to non-convex composite objective functions: the idea is to transform a Bregman divergence to account for the non-linear structure of neural architecture. Working through the details for deep fully-connected networks yields automatic gradient descent: a first-order optimiser without any hyperparameters. Automatic gradient descent trains both fully-connected and convolutional networks out-of-the-box and at ImageNet scale. A PyTorch implementation is available at https://github.com/jxbz/agd and also in Appendix B. Overall, the paper supplies a rigorous theoretical foundation for a next generation of architecture-dependent optimisers that work automatically and without hyperparameters.
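To give a flavour of what an architecture-aware, hyperparameter-free first-order update can look like, the sketch below normalises each layer's gradient and scales the step by a width-dependent factor and by a step size computed from the gradients themselves. This is an illustrative assumption rather than the paper's exact update rule: the function name agd_style_step, the sqrt(out_features / in_features) scaling and the step-size formula are placeholders; the reference implementation is the one at https://github.com/jxbz/agd and in Appendix B.

```python
# Illustrative sketch only: an architecture-aware, per-layer normalised update
# in the spirit of automatic gradient descent. The scaling factors and the
# "automatic" step size below are placeholders, not the paper's derived formulas.

import math
import torch
import torch.nn as nn

def agd_style_step(model: nn.Sequential) -> None:
    """Apply one hyperparameter-free, width-scaled update to each Linear layer.

    Assumes `model` is a fully-connected nn.Sequential whose Linear layers
    already have .grad populated by a preceding loss.backward() call.
    """
    linear_layers = [m for m in model if isinstance(m, nn.Linear)]
    depth = len(linear_layers)

    # Summarise the gradient across layers (placeholder for an architecture-aware
    # gradient summary).
    grad_summary = sum(
        math.sqrt(m.out_features / m.in_features) * m.weight.grad.norm().item()
        for m in linear_layers
    ) / depth

    # "Automatic" step size computed from the gradient summary (placeholder).
    eta = math.log(0.5 * (1.0 + math.sqrt(1.0 + 4.0 * grad_summary)))

    with torch.no_grad():
        for m in linear_layers:
            scale = math.sqrt(m.out_features / m.in_features)
            g = m.weight.grad
            # Normalised, width-scaled first-order update with no tunable knobs.
            m.weight -= (eta / depth) * scale * g / (g.norm() + 1e-12)
```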