Training deep neural networks from scratch, without pre-trained weights and with limited data, is known to require more training iterations. It is also known that deeper models are more successful than their shallower counterparts at the semantic segmentation task. Thus, we introduce the EfficientSeg architecture, a modified and scalable version of U-Net that can be trained efficiently despite its depth. We evaluated EfficientSeg on the Minicity dataset, where it outperformed the U-Net baseline score (40% mIoU) while using the same parameter count, reaching 51.5% mIoU. Our most successful model achieved a 58.1% mIoU score and took fourth place in the semantic segmentation track of the ECCV 2020 VIPriors challenge.