Although neural networks are widely used in a large number of applications, they are still regarded as black boxes and are difficult to dimension or to evaluate in terms of prediction error. This has led to growing interest in the overlap between neural networks and more traditional statistical methods, which can help overcome these problems. In this article, a mathematical framework relating neural networks and polynomial regression is explored by building an explicit expression for the coefficients of a polynomial regression from the weights of a given neural network, using a Taylor expansion approach. This is achieved for single-hidden-layer neural networks in regression problems. The validity of the proposed method depends on several factors, such as the distribution of the synaptic potentials or the chosen activation function. The performance of the method is tested empirically by simulating synthetic data generated from polynomials and using it to train neural networks with different structures and hyperparameters, showing that almost identical predictions can be obtained when certain conditions are met. Finally, when learning from polynomial-generated data, the proposed method produces polynomials that approximate the data correctly in a local sense.
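To make the general idea concrete, the following is a minimal sketch, not the paper's exact construction: for a scalar-input, single-hidden-layer network f(x) = c + sum_j v_j * tanh(w_j * x + b_j), each hidden unit is Taylor-expanded around x = 0 and the resulting powers of x are collected into polynomial coefficients. All names (w, b, v, c, degree) and the choice of expansion point are illustrative assumptions.

```python
# Illustrative sketch (assumed setup, not the paper's method): recover polynomial
# coefficients from the weights of a single-hidden-layer tanh network by
# Taylor-expanding each hidden unit around x = 0 and collecting powers of x.
from math import factorial
import numpy as np

def tanh_derivative_polys(order):
    """P_0..P_order with d^n/du^n tanh(u) = P_n(tanh(u)), via P_{n+1} = P_n' * (1 - t^2)."""
    polys = [np.polynomial.Polynomial([0.0, 1.0])]             # P_0(t) = t
    one_minus_t2 = np.polynomial.Polynomial([1.0, 0.0, -1.0])  # 1 - t^2
    for _ in range(order):
        polys.append(polys[-1].deriv() * one_minus_t2)
    return polys

def polynomial_from_network(w, b, v, c, degree):
    """Coefficient of x^k is sum_j v_j * tanh^(k)(b_j) * w_j^k / k! (plus c for k = 0)."""
    polys = tanh_derivative_polys(degree)
    t = np.tanh(b)
    coeffs = np.zeros(degree + 1)
    coeffs[0] = c
    for k in range(degree + 1):
        coeffs[k] += np.sum(v * polys[k](t) * w**k) / factorial(k)
    return coeffs  # coeffs[k] multiplies x**k

# Toy check: the truncated polynomial should match the network near x = 0.
rng = np.random.default_rng(0)
w, b, v, c = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5), 0.3
coeffs = polynomial_from_network(w, b, v, c, degree=3)
x = np.linspace(-0.3, 0.3, 7)
net = c + np.sum(v * np.tanh(np.outer(x, w) + b), axis=1)
poly = np.polynomial.polynomial.polyval(x, coeffs)
print(np.max(np.abs(net - poly)))  # small near the expansion point, grows away from it
```

Consistent with the abstract, how well such a truncation behaves depends on where the synaptic potentials w_j * x + b_j concentrate and on the smoothness of the chosen activation around that region.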