The development of methods to guide the design of neural networks is an important open challenge for deep learning theory. As a paradigm for principled neural architecture design, we propose the translation of high-performing kernels, which are better understood and amenable to first-principles design, into equivalent network architectures, which have superior efficiency, flexibility, and feature learning. To this end, we constructively prove that, with just an appropriate choice of activation function, any positive-semidefinite dot-product kernel can be realized as either the conjugate or neural tangent kernel of a fully-connected neural network with only one hidden layer. We verify our construction numerically and demonstrate its utility as a design tool for finite fully-connected networks in several experiments.
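The abstract does not spell out the construction, but the idea it alludes to can be illustrated numerically. The sketch below, a minimal and hypothetical one, matches the Hermite coefficients of a candidate activation to the power-series coefficients of a target dot-product kernel, then Monte-Carlo estimates the conjugate kernel of a one-hidden-layer network and compares it to the target. The choice of target kernel K(c) = e^{c-1}, the truncation order KMAX, and all variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

# Hypothetical target: the dot-product kernel K(c) = e^{c-1}, whose power-series
# coefficients b_k = e^{-1} / k! are all nonnegative, as positive-semidefiniteness requires.
KMAX = 10  # truncation order of the expansion (an illustration choice)
b = np.array([np.exp(-1.0) / factorial(k) for k in range(KMAX + 1)])

# Candidate activation realizing K as a conjugate kernel: phi(z) = sum_k sqrt(b_k) h_k(z),
# with h_k = He_k / sqrt(k!) the normalized probabilist's Hermite polynomials, which are
# orthonormal under the standard Gaussian measure.
coeffs = np.sqrt(b) / np.sqrt([float(factorial(k)) for k in range(KMAX + 1)])

def phi(z):
    return He.hermeval(z, coeffs)

rng = np.random.default_rng(0)
d, n_hidden = 50, 200_000
W = rng.standard_normal((n_hidden, d))  # hidden-layer weights, rows w ~ N(0, I_d)

# Unit-norm inputs with prescribed correlation c = x . x'
x = np.zeros(d); x[0] = 1.0
for c in [-0.5, 0.0, 0.5, 0.9]:
    xp = np.zeros(d); xp[0], xp[1] = c, np.sqrt(1.0 - c**2)
    # Monte-Carlo conjugate kernel of the one-hidden-layer network:
    # K_hat(x, x') = E_w[phi(w . x) phi(w . x')]
    k_hat = np.mean(phi(W @ x) * phi(W @ xp))
    print(f"c = {c:+.1f}: target {np.exp(c - 1.0):.4f}, empirical {k_hat:.4f}")
```

Under these assumptions, the Hermite orthogonality relation E[h_j(u) h_k(v)] = δ_jk c^k for unit Gaussians with correlation c makes the empirical conjugate kernel converge to Σ_k b_k c^k, i.e. the target kernel; realizing the neural tangent kernel instead would follow the same pattern with a modified coefficient matching.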