The well-designed structures in neural networks reflect the prior knowledge incorporated into the models. However, though different models encode different priors, we are used to training them with model-agnostic optimizers such as SGD. In this paper, we propose to incorporate model-specific prior knowledge into optimizers by modifying the gradients according to a set of model-specific hyper-parameters. Such a methodology is referred to as Gradient Re-parameterization, and the optimizers are named RepOptimizers. Aiming at extreme simplicity of model structure, we focus on a VGG-style plain model and show that such a simple model trained with a RepOptimizer, which is referred to as RepOpt-VGG, performs on par with or better than recent well-designed models. From a practical perspective, RepOpt-VGG is a favorable base model because of its simple structure, high inference speed, and training efficiency. Compared to Structural Re-parameterization, which adds priors into models by constructing extra training-time structures, RepOptimizers require no extra forward/backward computations and solve the problem of quantization. We hope this work will spark further research beyond the realm of model structure design. The code and models are publicly available at https://github.com/DingXiaoH/RepOptimizers.
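The abstract only names the core mechanism (rescaling gradients by constants derived from model-specific hyper-parameters), so a minimal, hypothetical PyTorch sketch is given below for illustration. The class name GradReparamSGD, the grad_mults argument, and the toy multiplier values are assumptions made for this sketch, not the interface of the released RepOptimizers code; how the actual constants are derived is described in the paper and repository.

```python
import torch
from torch.optim import SGD


class GradReparamSGD(SGD):
    """Minimal sketch of a gradient-re-parameterizing SGD (hypothetical API).

    Each listed parameter may be paired with a constant multiplier tensor that
    encodes model-specific prior knowledge; the raw gradient is scaled
    element-wise by this multiplier before the ordinary SGD update, so the
    plain model is trained under the dynamics the prior would induce.
    """

    def __init__(self, params, grad_mults, lr, momentum=0.9, weight_decay=1e-4):
        super().__init__(params, lr=lr, momentum=momentum, weight_decay=weight_decay)
        # grad_mults: dict mapping a parameter tensor to a constant multiplier of
        # the same shape. Deriving these constants is the core of the method and
        # is not reproduced in this sketch.
        self.grad_mults = grad_mults

    def step(self, closure=None):
        # Re-parameterize the gradients in place, then run the standard SGD update.
        with torch.no_grad():
            for group in self.param_groups:
                for p in group["params"]:
                    if p.grad is not None and p in self.grad_mults:
                        p.grad.mul_(self.grad_mults[p])
        return super().step(closure)


# Toy usage: scale the gradient of a single conv kernel by a constant mask.
conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
mults = {conv.weight: torch.full_like(conv.weight, 2.0)}  # illustrative value only
opt = GradReparamSGD(conv.parameters(), mults, lr=0.1)

x = torch.randn(4, 3, 32, 32)
loss = conv(x).square().mean()
loss.backward()
opt.step()
opt.zero_grad()
```

Because the modification lives entirely in the optimizer, the model itself stays a plain stack of layers at both training and inference time, which is the practical point the abstract makes about avoiding extra forward/backward computation.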