Neural networks form the foundation of deep learning and numerous AI applications. Classical neural networks are fully connected, expensive to train, and prone to overfitting. Sparse networks, in turn, tend to require convoluted structure searches and suffer from suboptimal performance and limited usage. We propose the novel uniform sparse network (USN), with even and sparse connectivity within each layer. USN has one striking property: its performance is independent of the substantial topology variation and enormous model space, and it thus offers a search-free solution to all of the above issues. USN consistently and substantially outperforms state-of-the-art sparse network models in prediction accuracy, speed, and robustness. It even achieves higher prediction accuracy than the fully connected network with only 0.55% of the parameters and 1/4 of the computing time and resources. Importantly, USN is conceptually simple, as a natural generalization of the fully connected network, with multiple improvements in accuracy, robustness, and scalability. USN can replace the latter in a range of applications, data types, and deep learning architectures. We have made USN open source at https://github.com/datapplab/sparsenet.
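To make the idea of "even and sparse connectivity within each layer" concrete, here is a minimal, hypothetical sketch of how a uniform sparse connectivity mask for one layer could be built. This is our own illustrative reading, not the authors' code; the function name, the `fan_in` parameter, and the exact connection pattern are assumptions, and the actual USN construction is in the linked repository.

```python
# Hypothetical sketch of a uniformly sparse layer mask (illustrative only;
# the real USN construction lives at github.com/datapplab/sparsenet).
def uniform_sparse_mask(n_in, n_out, fan_in):
    """Connect each output unit to `fan_in` inputs, spaced evenly
    across the input layer so every input is used about equally often."""
    mask = [[0] * n_in for _ in range(n_out)]
    for j in range(n_out):
        for k in range(fan_in):
            # evenly space the connections, with a per-output offset
            i = (j + k * n_in // fan_in) % n_in
            mask[j][i] = 1
    return mask

# 4 output units, each wired to 2 of 8 inputs: 25% of the
# connections (and trainable weights) of a fully connected layer.
m = uniform_sparse_mask(8, 4, 2)
assert all(sum(row) == 2 for row in m)  # fixed fan-in per output unit
```

In practice such a fixed binary mask would be applied elementwise to the weight matrix of a dense layer, so forward and backward passes only use the surviving connections; the sparsity level (here `fan_in / n_in`) is what drives the parameter and compute savings the abstract reports.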