Deep neural networks have made significant progress in the field of computer vision. Recent studies have shown that the depth, width, and shortcut connections of a neural network architecture play a crucial role in its performance. DenseNet, one of the most advanced architectures, achieves excellent convergence through its dense connections, but it still has obvious shortcomings in memory usage. In this paper, we introduce a new pruning mechanism, the threshold, inspired by the principle of the threshold voltage in a MOSFET. We use this mechanism to connect blocks of different depths in different ways, thereby reducing memory usage; the resulting network is denoted ThresholdNet. We evaluate ThresholdNet against other networks on the CIFAR10 dataset. Experiments show that HarDNet is twice as fast as DenseNet, and, on this basis, ThresholdNet is 10% faster than HarDNet with a 10% lower error rate.
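As a rough illustration of the idea, and not the paper's actual implementation, the sketch below gates shortcut connections with a fixed threshold: a connection between two layers is kept only when its learned gate score exceeds the threshold, by analogy with a MOSFET that conducts only above its threshold voltage. The class name `ThresholdBlock`, the sigmoid gate scores, and the additive (rather than concatenative) shortcuts are all illustrative assumptions; the abstract does not specify the concrete connection rule between blocks of different depths.

```python
# Minimal sketch of a threshold-gated block, assuming a hypothetical rule:
# layer i keeps a shortcut from layer j (j <= i) only when a learned gate
# score exceeds a fixed threshold. Pruned shortcuts mean fewer retained
# feature maps, which is the intended memory saving.
import torch
import torch.nn as nn

class ThresholdBlock(nn.Module):
    def __init__(self, channels: int, depth: int, threshold: float = 0.5):
        super().__init__()
        self.threshold = threshold
        self.layers = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(depth)
        )
        # One learnable gate score per candidate shortcut (j -> i).
        self.gates = nn.Parameter(torch.rand(depth, depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = [x]
        for i, layer in enumerate(self.layers):
            h = layer(outputs[-1])
            for j in range(i + 1):
                g = torch.sigmoid(self.gates[i, j])
                # Keep the shortcut only above the threshold, instead of
                # DenseNet's all-to-all dense connectivity.
                if g > self.threshold:
                    h = h + g * outputs[j]
            outputs.append(torch.relu(h))
        return outputs[-1]
```

For example, `ThresholdBlock(channels=16, depth=4)` applied to a `(1, 16, 32, 32)` tensor returns a tensor of the same shape, with only the above-threshold shortcuts contributing to each layer's input.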