We propose a learning framework based on stochastic Bregman iterations, also known as mirror descent, to train sparse neural networks with an inverse scale space approach. We derive a baseline algorithm called LinBreg, an accelerated version using momentum, and AdaBreg, which is a Bregmanized generalization of the Adam algorithm. In contrast to established methods for sparse training, the proposed family of algorithms constitutes a regrowth strategy for neural networks that is solely optimization-based, without additional heuristics. Our Bregman learning framework starts the training with very few initial parameters, successively adding only significant ones to obtain a sparse and expressive network. The proposed approach is extremely easy and efficient, yet supported by the rich mathematical theory of inverse scale space methods. We derive a statistically profound sparse parameter initialization strategy and provide a rigorous stochastic convergence analysis of the loss decay, as well as additional convergence proofs in the convex regime. Using only 3.4% of the parameters of ResNet-18, we achieve 90.2% test accuracy on CIFAR-10, compared to 93.6% using the dense network. Our algorithm also unveils an autoencoder architecture for a denoising task. Furthermore, the proposed framework has great potential for integrating sparse backpropagation and resource-friendly training.
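To make the iteration concrete, the following is a minimal NumPy sketch of a LinBreg-style update, assuming the sparsity functional is the ℓ1 norm so that the associated proximal map is component-wise soft shrinkage. The function names (`linbreg`, `soft_shrinkage`), the zero initialization of the dual variable, the toy least-squares loss, and all hyperparameter values are illustrative assumptions for a convex example, not the implementation or initialization strategy used in the paper.

```python
import numpy as np

def soft_shrinkage(v, lam):
    """Proximal map of lam * ||.||_1 (component-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linbreg(grad_fn, n_params, tau=0.05, lam=0.5, n_iter=2000):
    """
    LinBreg-style linearized Bregman iteration for a loss L with the
    sparsity functional J(theta) = lam * ||theta||_1 (illustrative sketch).

    grad_fn(theta) returns a (stochastic) gradient of L at theta.
    The dual/subgradient variable v accumulates gradient information;
    a component theta_i becomes non-zero only once |v_i| exceeds lam,
    so the iterate starts (almost) empty and grows along an inverse
    scale space path.
    """
    v = np.zeros(n_params)              # illustrative initialization: empty iterate
    theta = soft_shrinkage(v, lam)      # hence theta = 0 at the start
    for _ in range(n_iter):
        g = grad_fn(theta)              # (stochastic) gradient at the sparse iterate
        v -= tau * g                    # gradient step on the dual variable
        theta = soft_shrinkage(v, lam)  # primal iterate via the proximal map
    return theta

# Toy usage: sparse recovery for a synthetic least-squares loss.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0
b = A @ x_true
theta = linbreg(lambda th: A.T @ (A @ th - b) / len(b), n_params=200)
print("non-zero entries:", np.count_nonzero(theta))
```

The point of the sketch is the regrowth mechanism: a parameter stays exactly zero until the gradient information accumulated in the dual variable crosses the threshold, which mirrors the inverse scale space behaviour of starting from very few parameters and successively adding only significant ones.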