With few exceptions, neural networks have relied on backpropagation and gradient descent as the inference engine for learning model parameters, because closed-form Bayesian inference for neural networks has been considered intractable. In this paper, we show how to leverage the capabilities of tractable approximate Gaussian inference (TAGI) to infer hidden states, rather than using it only to infer the network's parameters. One novel aspect this enables is inferring hidden states by imposing constraints designed to achieve specific objectives, as illustrated through three examples: (1) the generation of adversarial-attack examples, (2) the use of a neural network as a black-box optimization method, and (3) the application of inference to continuous-action reinforcement learning. These applications showcase how tasks that were previously reserved for gradient-based optimization approaches can now be approached with analytically tractable inference.