We examine the zero-temperature Metropolis Monte Carlo algorithm as a tool for training a neural network by minimizing a loss function. We find that, as expected on theoretical grounds and shown empirically by other authors, Metropolis Monte Carlo can train a neural net to an accuracy comparable with that of gradient descent, if not necessarily as quickly. The Metropolis algorithm does not fail automatically when the number of parameters of a neural network is large. It can fail when a neural network's structure or neuron activations are strongly heterogeneous, and we introduce an adaptive Monte Carlo algorithm, aMC, to overcome these limitations. The intrinsic stochasticity and numerical stability of the Monte Carlo method allow aMC to train deep neural networks and recurrent neural networks in which the gradient is too small or too large to allow training by gradient descent. Monte Carlo methods offer a complement to gradient-based methods for training neural networks, allowing access to a distinct set of network architectures and principles. A minimal sketch of the zero-temperature scheme appears below.
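To illustrate the basic idea, here is a minimal sketch of zero-temperature Metropolis Monte Carlo training: every parameter is perturbed by Gaussian noise, and the move is accepted only if the loss does not increase. The toy task, network size, noise scale `sigma`, and step count are illustrative assumptions, not values from the paper, and the sketch omits the adaptive proposals that define aMC.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed for illustration): fit y = sin(x)
# with a small one-hidden-layer network.
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
y = np.sin(x)

def init_params(n_hidden=16):
    return {
        "W1": rng.normal(0.0, 1.0, (1, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 1.0, (n_hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"]

def loss(p):
    return float(np.mean((forward(p, x) - y) ** 2))

def metropolis_step(p, current_loss, sigma=0.02):
    """One zero-temperature Metropolis move: perturb all parameters
    with Gaussian noise; accept only if the loss does not increase."""
    trial = {k: v + rng.normal(0.0, sigma, v.shape) for k, v in p.items()}
    trial_loss = loss(trial)
    if trial_loss <= current_loss:
        return trial, trial_loss  # accept the move
    return p, current_loss        # reject: keep the old parameters

params = init_params()
current_loss = loss(params)
for step in range(20000):
    params, current_loss = metropolis_step(params, current_loss)
print(f"final loss: {current_loss:.4f}")
```

Note that the zero-temperature acceptance rule never moves uphill, so the proposal scale `sigma` controls the trade-off between acceptance rate and progress per accepted move; adapting this scale per parameter is the kind of refinement the abstract attributes to aMC.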