Deep learning models for medical imaging are often large and complex, requiring specialized hardware to train and evaluate. To address this, we propose the PocketNet paradigm, which reduces model size by throttling the growth of the number of channels in convolutional neural networks. We demonstrate that, for a range of segmentation and classification tasks, PocketNet architectures produce results comparable to those of conventional neural networks while reducing the number of parameters by multiple orders of magnitude, using up to 90% less GPU memory, and speeding up training by up to 40%, thereby allowing such models to be trained and deployed in resource-constrained settings.
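The core idea of capping channel growth can be illustrated with a back-of-the-envelope parameter count. The sketch below, with illustrative layer widths that are assumptions rather than the architectures evaluated in the paper, compares one 3x3 convolution per encoder level under the conventional "double the channels at each level" scheme against a PocketNet-style constant-width scheme:

```python
# Hedged sketch: parameter counts for a stack of 3x3 2-D convolutions.
# The widths and depth below are illustrative assumptions, not the
# networks from the paper.

def conv_params(c_in, c_out, k=3):
    """Weights plus biases of a single k x k convolution layer."""
    return c_in * c_out * k * k + c_out

def encoder_params(widths, c_in=1):
    """Total parameters for one convolution per level at the given widths."""
    total = 0
    for c_out in widths:
        total += conv_params(c_in, c_out)
        c_in = c_out
    return total

conventional = encoder_params([32, 64, 128, 256, 512])  # channels double each level
pocket = encoder_params([32, 32, 32, 32, 32])           # channel count held constant

print(conventional, pocket)  # the doubling scheme uses ~42x more parameters
```

Because the cost of a convolution scales with the product of its input and output channel counts, freezing the channel count turns the per-level cost from geometric growth into a constant, which is where the memory and training-time savings come from.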