Neural networks are a convenient way to automatically fit functions that are too complex to be specified by hand. The downside of this approach is that it produces a black box, with no understanding of what happens inside. Finding the preimage would help to better understand how and why such a neural network produces a given output. Because most neural networks are non-injective functions, it is usually impossible to compute the preimage entirely by purely numerical means. The aim of this study is to give a method for computing the exact preimage of any feed-forward neural network with linear or piecewise-linear activation functions in its hidden layers. In contrast to other methods, which return a single input for a given output, this one returns the entire, exact preimage analytically.
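The non-injectivity the abstract refers to is already visible in a single ReLU unit: every input in the inactive half-space maps to the same output, 0, so the preimage of an output value is in general a set rather than a point. The following toy 1-D sketch (not the paper's algorithm, just an illustration of the idea under the assumption of a single neuron x ↦ relu(w·x + b) with w ≠ 0) returns that set symbolically:

```python
def relu(z):
    """Rectified linear unit."""
    return max(z, 0.0)

def relu_preimage_1d(w, b, y):
    """Exact preimage of y under x -> relu(w*x + b), with w != 0.

    Returns:
      ('empty',)               when no input maps to y (y < 0),
      ('point', x)             when the preimage is the single point x,
      ('halfline', x0, s)      when it is the half-line {x : s*(x - x0) <= 0}.
    """
    if y < 0:
        # ReLU never outputs a negative value.
        return ('empty',)
    if y > 0:
        # Active region: w*x + b = y has a unique solution.
        return ('point', (y - b) / w)
    # y == 0: the preimage is the whole inactive half-line {x : w*x + b <= 0},
    # which is exactly where the map fails to be injective.
    return ('halfline', -b / w, 1.0 if w > 0 else -1.0)
```

For example, with w = 2 and b = -1, the output 3 has the unique preimage x = 2, while the output 0 is reached by every x ≤ 0.5. A full piecewise-linear network would require tracking such regions through every layer, as an intersection of half-spaces per linear piece.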