In convolutional neural networks (CNNs), standard dropout does not work well because the dropped information is not entirely obscured in convolutional layers, where features are spatially correlated. Beyond randomly discarding regions or channels, many approaches try to overcome this defect by dropping influential units. In this paper, we propose a non-random dropout method named FocusedDropout, which aims to make the network focus more on the target. In FocusedDropout, we use a simple but effective way to search for the target-related features, retaining these features and discarding the others, which is contrary to existing methods. We find that this novel method improves network performance by making the network more target-focused. Moreover, increasing the weight decay while using FocusedDropout avoids overfitting and further increases accuracy. Experimental results show that even at a slight cost, applying FocusedDropout to only 10\% of batches, the method produces a clear performance boost over the baselines on multiple classification datasets, including CIFAR10, CIFAR100, and Tiny ImageNet, and generalizes well across different CNN models.
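To make the idea concrete, below is a minimal sketch of a FocusedDropout-style layer. It assumes the target-related region is located via the channel with the highest average activation and kept via a thresholded spatial mask; the specific thresholding rule, the `low`/`high` range, and the class name are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of a FocusedDropout-style layer (PyTorch).
# Assumption: the target region is estimated from the channel with the
# highest mean activation; the random-threshold rule below is illustrative.
import torch
import torch.nn as nn

class FocusedDropoutSketch(nn.Module):
    def __init__(self, low: float = 0.6, high: float = 0.9):
        super().__init__()
        self.low, self.high = low, high  # assumed threshold-ratio range

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # like standard dropout, identity at inference
        n, c, h, w = x.shape
        # 1. Per sample, pick the channel with the highest mean activation
        #    as a rough localization of the target-related features.
        avg = x.mean(dim=(2, 3))                 # (N, C)
        idx = avg.argmax(dim=1)                  # (N,)
        ref = x[torch.arange(n), idx]            # (N, H, W) reference channel
        # 2. Threshold it at a random fraction of its per-sample maximum
        #    to obtain a binary spatial mask of high-activation positions.
        ratio = torch.empty(n, 1, 1, device=x.device).uniform_(self.low, self.high)
        mask = (ref >= ratio * ref.amax(dim=(1, 2), keepdim=True)).float()
        # 3. Retain only the masked positions in every channel; discard the
        #    rest (the opposite of dropping the most influential units).
        return x * mask.unsqueeze(1)
```

In line with the abstract, such a layer would be enabled on only a small fraction of training batches (e.g. 10\%) and combined with a larger weight decay, rather than applied at every step.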