Pruning the weights of randomly initialized neural networks plays an important role in the context of the lottery ticket hypothesis. Ramanujan et al. (2020) empirically showed that pruning the weights alone, without optimizing the weight values, can achieve remarkable performance. However, to reach the same level of performance as weight optimization, the pruning approach requires more parameters in the network before pruning, and thus more memory. To overcome this parameter inefficiency, we introduce a novel framework for pruning randomly initialized neural networks with iterative randomization of weight values (IteRand). Theoretically, we prove an approximation theorem within our framework, which indicates that the randomizing operations are provably effective in reducing the required number of parameters. We also empirically demonstrate the parameter efficiency in multiple experiments on CIFAR-10 and ImageNet.
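The abstract only summarizes the approach at a high level; below is a minimal, hypothetical PyTorch sketch of the underlying idea as described here: weights stay frozen at their random initialization, a per-weight score is learned and the top-scoring weights are kept (as in Ramanujan et al. (2020)), and the currently pruned weights are periodically re-randomized. This is not the authors' implementation; the class names, `sparsity`, and `rerand_every` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMaskSTE(torch.autograd.Function):
    """Binary mask keeping the k largest scores, with a straight-through gradient."""
    @staticmethod
    def forward(ctx, scores, k):
        mask = torch.zeros_like(scores)
        idx = scores.flatten().topk(k).indices
        mask.view(-1)[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient to the scores unchanged.
        return grad_output, None

class PrunableLinear(nn.Module):
    """Linear layer with frozen random weights and learnable pruning scores (assumed design)."""
    def __init__(self, in_features, out_features, sparsity=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)  # weights stay at random init
        nn.init.kaiming_normal_(self.weight)
        self.scores = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.k = int(self.weight.numel() * (1.0 - sparsity))  # number of weights kept

    def forward(self, x):
        mask = TopKMaskSTE.apply(self.scores, self.k)
        return F.linear(x, self.weight * mask)

    @torch.no_grad()
    def rerandomize_pruned(self):
        # Redraw fresh random values only for the currently pruned weights,
        # so later score updates can select from new random values.
        mask = TopKMaskSTE.apply(self.scores, self.k)
        fresh = torch.empty_like(self.weight)
        nn.init.kaiming_normal_(fresh)
        self.weight.copy_(torch.where(mask.bool(), self.weight, fresh))

def train_step(model, optimizer, x, y, step, rerand_every=300):
    """One step of score optimization, with periodic re-randomization of pruned weights."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    if (step + 1) % rerand_every == 0:
        for m in model.modules():
            if isinstance(m, PrunableLinear):
                m.rerandomize_pruned()
    return loss.item()
```

In this sketch only the scores are trainable, so the optimizer would be built over parameters with `requires_grad=True`, e.g. `torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)`; the re-randomization interval and the fraction of weights redrawn are the illustrative knobs here, not values taken from the paper.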