Neural Networks (NNs) struggle to efficiently learn certain problems, such as parity problems, even when there are simple learning algorithms for those problems. Can NNs discover learning algorithms on their own? We exhibit an NN architecture that, in polynomial time, learns as well as any efficient learning algorithm describable by a constant-sized program. For example, on parity problems, the NN learns as well as row reduction, an efficient algorithm that can be succinctly described. Our architecture combines both recurrent weight-sharing between layers and convolutional weight-sharing to reduce the number of parameters down to a constant, even though the network itself may have trillions of nodes. While in practice the constants in our analysis are too large to be directly meaningful, our work suggests that the synergy of Recurrent and Convolutional NNs (RCNNs) may be more powerful than either alone.
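To make the parity example concrete, below is a minimal sketch (not the paper's implementation) of the succinct learning algorithm the abstract refers to: recovering a hidden parity from labeled examples by row reduction (Gaussian elimination) over GF(2). The function name, data shapes, and toy usage are illustrative assumptions.

```python
import numpy as np

def learn_parity(X, y):
    """Recover a hidden parity from examples via row reduction over GF(2).

    X: (m, n) 0/1 matrix of example inputs; y: length-m 0/1 label vector.
    Returns a 0/1 vector w with X @ w == y (mod 2) when the system is consistent.
    """
    A = np.concatenate([X, y[:, None]], axis=1) % 2  # augmented matrix over GF(2)
    m, n = X.shape
    pivot_cols = []
    row = 0
    for col in range(n):
        # find a row at or below `row` with a 1 in this column
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]  # swap pivot row into place
        for r in range(m):                 # eliminate this column from all other rows
            if r != row and A[r, col]:
                A[r] = (A[r] + A[row]) % 2
        pivot_cols.append(col)
        row += 1
    w = np.zeros(n, dtype=int)
    for r, col in enumerate(pivot_cols):   # read off one solution (free variables = 0)
        w[col] = A[r, n]
    return w

# toy usage: hidden parity over coordinates {0, 2}
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(50, 5))
secret = np.array([1, 0, 1, 0, 0])
y = X @ secret % 2
print(learn_parity(X, y))  # recovers the hidden parity given enough examples
```

Because each step of this algorithm is simple and its description is constant-sized, it is the kind of succinct learning algorithm the constant-parameter RCNN is shown to emulate.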