Since its inception, deep learning has been overwhelmingly reliant on backpropagation and gradient-based optimization algorithms to learn weight and bias parameters. The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks. In this paper, we demonstrate how TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures. Although TAGI's computational efficiency is still below that of deterministic approaches relying on backpropagation, it outperforms them on classification tasks and matches their performance for information-maximizing generative adversarial networks, while using smaller architectures trained with fewer epochs.
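For intuition on what replaces the gradient step, the primitive behind Gaussian-inference schemes such as TAGI is the analytic propagation of means and variances through network operations, rather than the propagation of gradients. The sketch below is a minimal illustration of that idea for a single scalar linear unit, assuming weight, input, and bias are independent Gaussians; the function name and numbers are ours for illustration, not the paper's implementation.

```python
def linear_gaussian_moments(mu_w, var_w, mu_x, var_x, mu_b, var_b):
    """Mean and variance of y = w*x + b for independent Gaussian w, x, b.

    Uses the exact closed-form moments of a product of independent
    Gaussians, so no sampling or gradient computation is involved.
    """
    mu_y = mu_w * mu_x + mu_b
    # Var(w*x) = var_w*var_x + var_w*mu_x^2 + mu_w^2*var_x, plus the bias variance.
    var_y = var_w * var_x + var_w * mu_x**2 + mu_w**2 * var_x + var_b
    return mu_y, var_y

# Example: propagate moments through one scalar connection.
mu_y, var_y = linear_gaussian_moments(mu_w=0.5, var_w=0.1,
                                      mu_x=1.0, var_x=0.2,
                                      mu_b=0.0, var_b=0.05)
print(mu_y, var_y)  # 0.5 0.22
```

Because each such step is closed-form, the forward pass yields Gaussian moments for every hidden unit, and parameter updates can be obtained by Gaussian conditioning on the observed output instead of by a backward gradient sweep.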