An increasing number of models require control of the spectral norm of the convolutional layers of a neural network. While there is an abundance of methods for estimating and enforcing upper bounds on these norms during training, they are typically costly in either memory or time. In this work, we introduce a very simple method for spectral normalization of depthwise separable convolutions that incurs negligible computational and memory overhead. We demonstrate the effectiveness of our method on image classification tasks using standard architectures such as MobileNetV2.
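The abstract does not detail the paper's method, but as background on the quantity being controlled: a depthwise separable convolution factors into a per-channel (depthwise) convolution followed by a 1×1 pointwise convolution, so its spectral norm can be bounded by the product of the two factors' norms. Under a circular-padding assumption, each single-channel convolution is diagonalized by the 2-D DFT, and its spectral norm is the largest magnitude of the kernel's zero-padded FFT; the pointwise part is an ordinary matrix whose spectral norm is its top singular value. A minimal NumPy sketch of this standard bound (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def depthwise_separable_spectral_bound(dw_kernels, pw_matrix, h, w):
    """Upper bound on the spectral norm of a depthwise separable conv.

    dw_kernels: array of shape (C, kh, kw), one spatial kernel per channel.
    pw_matrix:  array of shape (C_out, C), the 1x1 pointwise conv weights.
    h, w:       spatial size of the input feature map.
    Assumes circular padding, so each single-channel conv is diagonalized
    by the 2-D DFT and its singular values are |FFT(kernel)|.
    """
    # Depthwise conv is block-diagonal across channels, so its spectral
    # norm is the max over the per-channel single-channel conv norms.
    dw_norm = max(np.abs(np.fft.fft2(k, s=(h, w))).max() for k in dw_kernels)
    # Pointwise 1x1 conv acts as a plain matrix on the channel dimension.
    pw_norm = np.linalg.svd(pw_matrix, compute_uv=False)[0]
    # Spectral norm of a composition is at most the product of the norms.
    return dw_norm * pw_norm
```

For a delta depthwise kernel and an identity pointwise matrix the layer is the identity map, and the bound evaluates to exactly 1; scaling either factor scales the bound proportionally.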