Feature normalization transforms such as Batch and Layer-Normalization have become indispensable ingredients of state-of-the-art deep neural networks. Recent studies on fine-tuning large pretrained models indicate that tuning only the parameters of these affine transforms can achieve high accuracy on downstream tasks. These findings raise the question of the expressive power of tuning the normalization layers of otherwise frozen networks. In this work, we take a first step towards answering this question and show that for a random ReLU network, fine-tuning only its normalization layers can reconstruct any target network that is $O(\sqrt{\text{width}})$ times smaller. We show that this holds even for randomly sparsified networks, under sufficient overparameterization, in agreement with prior empirical work.
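As an illustration of the fine-tuning regime studied here, the following minimal sketch builds a LayerNorm ReLU network in PyTorch, freezes all weights, and leaves only the normalization layers' affine parameters (scale and shift) trainable. The architecture, sizes, and optimizer are illustrative assumptions, not the construction analyzed in the paper.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not the paper's setting).
width, depth, d_in, d_out = 256, 4, 32, 10

# Random ReLU network with a LayerNorm after each hidden linear layer.
layers = []
for i in range(depth):
    layers += [nn.Linear(d_in if i == 0 else width, width),
               nn.LayerNorm(width),
               nn.ReLU()]
layers.append(nn.Linear(width, d_out))
model = nn.Sequential(*layers)

# Freeze every parameter, then unfreeze only the LayerNorm affine parameters.
for p in model.parameters():
    p.requires_grad_(False)
for m in model.modules():
    if isinstance(m, nn.LayerNorm):
        m.weight.requires_grad_(True)
        m.bias.requires_grad_(True)

# Optimize only the normalization parameters; all other weights stay random.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```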