Pixel-wise image segmentation is a demanding task in computer vision. Classical U-Net architectures, composed of an encoder and a decoder, are very popular for segmentation of medical images, satellite images, etc. Typically, a neural network initialized with weights from a network pre-trained on a large dataset such as ImageNet performs better than one trained from scratch on a small dataset. In some practical applications, particularly in medicine and traffic safety, the accuracy of the models is of utmost importance. In this paper, we demonstrate how a U-Net-type architecture can be improved by the use of a pre-trained encoder. Our code and corresponding pre-trained weights are publicly available at https://github.com/ternaus/TernausNet. We compare three weight initialization schemes: LeCun uniform, an encoder with weights from VGG11, and the full network trained on the Carvana dataset. This network architecture was a part of the winning solution (1st out of 735) in the Kaggle Carvana Image Masking Challenge.