Wide Residual Networks (Wide-ResNets), a shallow-but-wide variant of Residual Networks (ResNets) built by stacking a small number of residual blocks with large channel sizes, have demonstrated outstanding performance on multiple dense prediction tasks. However, since its introduction, the Wide-ResNet architecture has barely evolved. In this work, we revisit its architecture design for the recent, challenging task of panoptic segmentation, which aims to unify semantic segmentation and instance segmentation. A baseline model is obtained by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution modules into the Wide-ResNets. Its network capacity is then scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that this simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance the state of the art on panoptic segmentation datasets, in both the fast and the strong model regimes.
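To make the scaling scheme concrete, the sketch below shows a residual block augmented with a Squeeze-and-Excitation module and a simplified Switchable Atrous Convolution, whose channel count and repetition count are controlled by width and depth multipliers. This is a minimal illustration, not the authors' implementation: the module structure, the SE reduction ratio, the two dilation rates, and the names `width_mult` and `depth_mult` are assumptions for exposition, and the SAC here omits details such as the pre/post context modules and weight deltas of the original design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SqueezeExcitation(nn.Module):
    """Standard SE block: global pooling -> bottleneck MLP -> channel gates."""
    def __init__(self, channels, reduction=4):  # reduction ratio is an assumption
        super().__init__()
        self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1)

    def forward(self, x):
        s = F.adaptive_avg_pool2d(x, 1)                    # squeeze: (N, C, 1, 1)
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))   # excitation gates
        return x * s                                       # channel-wise reweighting

class SimplifiedSAC(nn.Module):
    """Simplified Switchable Atrous Convolution: one shared 3x3 kernel applied
    at two dilation rates, blended by a learned switch (a sketch, not the full
    SAC of the paper)."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(channels, channels, 3, 3))
        nn.init.kaiming_normal_(self.weight)
        self.switch = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        s = torch.sigmoid(self.switch(F.adaptive_avg_pool2d(x, 1)))
        out1 = F.conv2d(x, self.weight, padding=1, dilation=1)  # small receptive field
        out3 = F.conv2d(x, self.weight, padding=3, dilation=3)  # large receptive field
        return s * out1 + (1 - s) * out3

class WideResidualBlock(nn.Module):
    """Basic residual block whose channel count is set by a width multiplier."""
    def __init__(self, base_channels, width_mult=1.0):
        super().__init__()
        c = int(base_channels * width_mult)  # width scaling knob
        self.conv = SimplifiedSAC(c)
        self.bn = nn.BatchNorm2d(c)
        self.se = SqueezeExcitation(c)

    def forward(self, x):
        return F.relu(x + self.se(self.bn(self.conv(x))))

def make_stage(base_channels, width_mult, depth_mult, base_depth=2):
    """Depth scaling knob: repeat the block round(base_depth * depth_mult) times."""
    n = max(1, round(base_depth * depth_mult))
    return nn.Sequential(*[WideResidualBlock(base_channels, width_mult)
                           for _ in range(n)])

# Example: one stage scaled to 2x width and 1.5x depth; a grid search over
# (width_mult, depth_mult) pairs would enumerate such configurations.
stage = make_stage(base_channels=64, width_mult=2.0, depth_mult=1.5)
x = torch.randn(1, 128, 32, 32)                  # channels = 64 * 2.0
print(stage(x).shape)                            # torch.Size([1, 128, 32, 32])
```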