The well-designed structures in neural networks reflect the prior knowledge incorporated into the models. However, though different models encode different priors, we are used to training them with model-agnostic optimizers such as SGD. In this paper, we propose to incorporate model-specific prior knowledge into optimizers by modifying the gradients according to a set of model-specific hyper-parameters. Such a methodology is referred to as Gradient Re-parameterization, and the optimizers are named RepOptimizers. Pursuing extreme simplicity of model structure, we focus on a VGG-style plain model and showcase that such a simple model trained with a RepOptimizer, which is referred to as RepOpt-VGG, performs on par with or better than recent well-designed models. From a practical perspective, RepOpt-VGG is a favorable base model because of its simple structure, high inference speed, and training efficiency. Compared to Structural Re-parameterization, which adds priors into models by constructing extra training-time structures, RepOptimizers require no extra forward/backward computations and solve the problem of quantization. We hope to spark further research beyond the realms of model structure design. Code and models are available at \url{https://github.com/DingXiaoH/RepOptimizers}.
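To make the idea concrete, below is a minimal PyTorch-style sketch of a gradient-re-parameterizing optimizer: an ordinary SGD step whose gradients are first rescaled element-wise by fixed, model-specific multipliers. The class name \texttt{GradReparamSGD}, the \texttt{grad\_mults} argument, and the way the multipliers are supplied are illustrative assumptions rather than the released RepOptimizers API; in the actual method, the multipliers would be derived from the priors of an equivalent multi-branch structure.

\begin{verbatim}
import torch
from torch.optim import SGD

class GradReparamSGD(SGD):
    """Hypothetical sketch: SGD whose gradients are rescaled element-wise
    by fixed, model-specific multipliers before the usual update."""

    def __init__(self, params, grad_mults, **sgd_kwargs):
        super().__init__(params, **sgd_kwargs)
        # grad_mults: dict mapping a parameter tensor to a multiplier tensor
        # of the same shape, precomputed from the model-specific priors.
        self.grad_mults = grad_mults

    def step(self, closure=None):
        with torch.no_grad():
            # Gradient Re-parameterization: fold the prior into the update
            # by scaling each gradient with its model-specific constants.
            for p, mult in self.grad_mults.items():
                if p.grad is not None:
                    p.grad.mul_(mult)
        # The rest of the update is ordinary SGD on the modified gradients.
        return super().step(closure)
\end{verbatim}

In such a setup, the trained plain model is the deployed model as-is: since the prior lives in the optimizer rather than in extra training-time branches, no structural conversion, extra forward/backward computation, or quantization-unfriendly fusion is involved.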