We present a greedy approach to constructing an efficient single-hidden-layer neural network with the ReLU activation that approximates a target function. In our approach we obtain a shallow network by applying a greedy algorithm to a dictionary prescribed by the available training data and a set of candidate inner weights. To facilitate the greedy selection process we employ an integral representation of the network, based on the ridgelet transform, that significantly reduces the cardinality of the dictionary and hence makes the greedy selection feasible. Our approach allows for the construction of efficient architectures which can be treated either as improved initializations to be used in place of random initializations, or, in certain cases, as fully trained networks, thus potentially removing the need for backpropagation training. Numerical experiments demonstrate the viability of the proposed concept and its advantages over conventional techniques for selecting architectures and initializations for neural networks.
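The abstract describes a greedy selection of ReLU atoms from a data-dependent dictionary of inner weights and biases. The sketch below illustrates one way such a construction can look, using an orthogonal-greedy (matching-pursuit-style) selection chosen here for concreteness; the paper's specific greedy variant and its ridgelet-based reduction of the candidate set are not reproduced. All names (`greedy_relu_network`, `candidate_weights`, `candidate_biases`) are illustrative assumptions, and the candidate inner weights are taken as given.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def greedy_relu_network(X, y, candidate_weights, candidate_biases, n_neurons):
    """Sketch: orthogonal-greedy construction of a single-hidden-layer ReLU network.

    X                 : (n_samples, d) training inputs
    y                 : (n_samples,) target values
    candidate_weights : (m, d) candidate inner weights forming the dictionary
                        (assumed given; in the paper the ridgelet transform is
                        used to shrink this set)
    candidate_biases  : (m,) biases paired with the candidate weights
    n_neurons         : number of hidden units to select
    """
    # Dictionary of ridge atoms evaluated on the training data, one column per atom.
    atoms = relu(X @ candidate_weights.T + candidate_biases)   # (n_samples, m)
    norms = np.linalg.norm(atoms, axis=0) + 1e-12

    selected = []
    residual = y.copy()
    coeffs = np.zeros(0)
    for _ in range(n_neurons):
        # Pick the atom most correlated with the current residual.
        scores = np.abs(atoms.T @ residual) / norms
        scores[selected] = -np.inf                 # do not reselect an atom
        k = int(np.argmax(scores))
        selected.append(k)
        # Re-fit all outer coefficients by least squares (orthogonal greedy step).
        A = atoms[:, selected]
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coeffs

    W = candidate_weights[selected]   # inner weights of the constructed network
    b = candidate_biases[selected]    # hidden-layer biases
    return W, b, coeffs               # network: x -> relu(x @ W.T + b) @ coeffs
```

The selected weights and outer coefficients can then be used either directly as a trained shallow network or as an initialization for subsequent gradient-based training, in the spirit of the two use cases mentioned above.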