Deep neural networks (DNNs) have been shown to provide superb performance in many real-life applications, but their large computation cost and storage requirements have prevented them from being deployed to many edge and internet-of-things (IoT) devices. Sparse deep neural networks, in which the majority of the weight parameters are zero, can substantially reduce the computation complexity and memory consumption of the models. In real-world scenarios, devices may experience large fluctuations in the available computation and memory resources across different environments, and the quality of service (QoS) is difficult to maintain due to long-tail inferences with large latency. Facing these real-life challenges, we propose to train a sparse model that supports multiple sparsity levels. That is, the weights satisfy a hierarchical structure such that the locations and the values of the non-zero parameters of a more-sparse sub-model are a subset of those of a less-sparse sub-model. In this way, one can dynamically select the appropriate sparsity level during inference, while the storage cost is capped by the least-sparse sub-model. We have verified our methodologies on a variety of DNN models and tasks, including ResNet-50, PointNet++, GNMT, and graph attention networks. We obtain sparse sub-models with an average of 13.38% of the weights and 14.97% of the FLOPs, while their accuracies are as good as those of their dense counterparts. More-sparse sub-models with 5.38% of the weights and 4.47% of the FLOPs, which are subsets of the less-sparse ones, can be obtained with only a 3.25% relative accuracy loss.
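To make the hierarchical structure concrete, the following is a minimal sketch (not the paper's actual training procedure) of how nested sparsity masks can be built by magnitude pruning, so that the non-zero positions kept at a higher sparsity level are a subset of those kept at a lower sparsity level; the keep fractions and function name are illustrative assumptions.

```python
# Minimal sketch of the nested-mask idea: the non-zero positions of a
# more-sparse sub-model are a subset of those of a less-sparse sub-model,
# so one stored weight tensor serves every sparsity level.
import numpy as np

def nested_masks(weights: np.ndarray, keep_fracs=(0.15, 0.05)):
    """Build boolean masks for decreasing keep fractions by magnitude pruning.

    Because every mask keeps the entries with the largest |w|, a smaller keep
    fraction selects a subset of the positions kept by a larger one, which
    yields the hierarchical (nested) structure described in the abstract.
    """
    order = np.argsort(np.abs(weights), axis=None)[::-1]  # flat indices, largest magnitude first
    masks = []
    for frac in sorted(keep_fracs, reverse=True):          # less sparse -> more sparse
        k = int(frac * weights.size)
        mask = np.zeros(weights.size, dtype=bool)
        mask[order[:k]] = True
        masks.append(mask.reshape(weights.shape))
    return masks

# Dynamic selection at inference: pick the sparsity level that matches the
# currently available compute, then apply the corresponding mask.
w = np.random.randn(256, 256).astype(np.float32)
less_sparse, more_sparse = nested_masks(w)
assert np.all(more_sparse <= less_sparse)                  # subset property holds
w_fast = w * more_sparse                                    # cheapest sub-model
w_accurate = w * less_sparse                                # more accurate sub-model
```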