We define the notion of a continuously differentiable perfect learning algorithm for multilayer neural network architectures and show that no such algorithm exists when the size of the data set exceeds the number of parameters involved and the activation functions are logistic, tanh, or sin.
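For orientation, one plausible reading of the definition is the following sketch (the paper's exact formulation may differ): for a fixed architecture with parameter space $\mathbb{R}^p$, a learning algorithm is a map $\mathcal{A}$ assigning to each data set $D = ((x_1, y_1), \dots, (x_N, y_N))$ a parameter vector $w = \mathcal{A}(D) \in \mathbb{R}^p$; it is perfect if the resulting network $f_w$ interpolates the data,
$$f_{\mathcal{A}(D)}(x_i) = y_i \qquad \text{for all } i = 1, \dots, N,$$
and continuously differentiable if $\mathcal{A}$ is a $C^1$ map of the data. The nonexistence claim then concerns the overdetermined regime $N > p$ for the stated activation functions.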