We give a complete characterisation of families of probability distributions that are invariant under the action of ReLU neural network layers. The need for such families arises during the training of Bayesian networks or the analysis of trained neural networks, e.g., in the context of uncertainty quantification (UQ) or explainable artificial intelligence (XAI). We prove that no invariant parametrised family of distributions can exist unless at least one of the following three restrictions holds: First, the network layers have a width of one, which is unreasonable for practical neural networks. Second, the probability measures in the family have finite support, which essentially amounts to sampling distributions. Third, the parametrisation of the family is not locally Lipschitz continuous, which excludes all computationally feasible families. Finally, we show that each of these restrictions is individually necessary: for each of the three cases we construct an invariant family that exploits exactly one of the restrictions but not the other two.
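To make the second restriction concrete, the following is a minimal illustrative sketch (not taken from the paper): a finitely supported measure, i.e., a weighted collection of atoms, is pushed forward through a ReLU layer x ↦ max(Wx + b, 0) and remains finitely supported with at most the same number of atoms and unchanged weights, so such families are closed under the layer action. The names `W`, `b`, and `relu_layer` are hypothetical and chosen only for this example. By contrast, a Gaussian pushed through a ReLU acquires a point mass at zero and therefore leaves the Gaussian family, which illustrates why familiar continuously parametrised families fail to be invariant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ReLU layer x -> max(Wx + b, 0); W and b are illustrative, not from the paper.
W = rng.standard_normal((3, 2))
b = rng.standard_normal(3)

def relu_layer(x):
    return np.maximum(W @ x + b, 0.0)

# A finitely supported distribution: five atoms in R^2 with uniform weights
# (an empirical / sampling distribution).
atoms = rng.standard_normal((5, 2))
weights = np.full(5, 1.0 / 5)

# Pushforward under the layer: map each atom, keep its weight. The result is
# again finitely supported, so the family is invariant under the layer action.
pushed_atoms = np.array([relu_layer(a) for a in atoms])

print(pushed_atoms.shape)          # (5, 3): still five atoms, now in R^3
mean_after = weights @ pushed_atoms  # mean of the pushforward measure, shape (3,)
print(mean_after)
```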