Overparameterized deep learning networks have shown impressive performance in automatic medical image segmentation. However, they achieve this performance at an enormous cost in memory, runtime, and energy. A large source of overparameterization in modern neural networks stems from the convention of doubling the number of feature maps at each downsampling layer. This rapid growth in the number of parameters yields architectures that demand significant computing resources, making them less accessible and harder to deploy. By keeping the number of feature maps constant throughout the network, we derive a new CNN architecture called PocketNet that achieves segmentation results comparable to conventional CNNs while using less than 3% of the number of parameters.
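The parameter savings from holding the width constant can be illustrated with a back-of-the-envelope count. The sketch below is hypothetical (not the authors' code): it tallies the weights of a U-Net-style 3D encoder with two 3×3×3 convolutions per resolution level, comparing the conventional channel-doubling rule against a constant-width rule. The depth, base width, and two-convolutions-per-level layout are illustrative assumptions.

```python
# Hypothetical parameter count for a U-Net-style 3D encoder.
# Assumptions (not from the paper): 5 resolution levels, base width 32,
# two 3x3x3 convolutions per level, single input channel.

def conv3d_params(c_in, c_out, k=3):
    # weights (c_in * c_out * k^3) plus one bias per output channel
    return c_in * c_out * k**3 + c_out

def encoder_params(widths, in_channels=1):
    # two convolutions per resolution level, as in a typical U-Net block
    total, c_prev = 0, in_channels
    for w in widths:
        total += conv3d_params(c_prev, w) + conv3d_params(w, w)
        c_prev = w
    return total

depth, base = 5, 32
conventional = encoder_params([base * 2**i for i in range(depth)])  # 32, 64, ..., 512
pocket = encoder_params([base] * depth)                             # 32 at every level
print(conventional, pocket, pocket / conventional)
```

Under these assumptions the constant-width encoder uses roughly 2% of the doubling encoder's parameters, consistent in magnitude with the sub-3% figure above; the exact ratio depends on depth, base width, and the decoder, which this sketch omits.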