Regularizers help deep neural networks prevent feature co-adaptation. Dropout, a commonly used regularization technique, stochastically disables neuron activations during network optimization. However, such complete feature disposal can harm the feature representation and network understanding. Toward better descriptions of latent representations, we present DropGraph, which learns a regularization function by constructing a stand-alone graph from the backbone features. DropGraph first samples stochastic spatial feature vectors and then applies graph reasoning to generate feature map distortions. This add-on graph regularizes the network during training and can be completely skipped during inference. We provide intuitions on the linkage between graph reasoning and Dropout, with further discussion of how partial graph reasoning reduces feature correlations. To this end, we extensively study the modeling of graph vertex dependencies and the utilization of the graph for distorting backbone feature maps. DropGraph is validated on four tasks with a total of seven different datasets. The experimental results show that our method outperforms other state-of-the-art regularizers while leaving the base model structure unmodified during inference.
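To make the described pipeline concrete, the following is a minimal PyTorch sketch of a DropGraph-style regularizer, not the authors' implementation: the module name, the number of sampled vertices, the similarity-based adjacency, the single linear vertex update, and the distortion scale are all illustrative assumptions; the abstract only specifies that random spatial feature vectors are sampled, graph reasoning produces a feature map distortion during training, and the module is skipped at inference.

```python
import torch
import torch.nn as nn


class DropGraphSketch(nn.Module):
    """Hypothetical sketch of a DropGraph-style regularizer (not the paper's code).

    Training: sample K random spatial positions from the feature map, treat their
    feature vectors as graph vertices, run one step of graph reasoning
    (similarity-based soft adjacency + linear update), and add the result back as
    a distortion at the sampled positions.
    Inference: identity, so the base model structure is left unmodified.
    """

    def __init__(self, channels: int, num_nodes: int = 16, strength: float = 0.1):
        super().__init__()
        self.num_nodes = num_nodes                    # K sampled spatial vertices (assumed)
        self.strength = strength                      # distortion scale (assumed)
        self.update = nn.Linear(channels, channels)   # vertex update function (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:                         # skipped entirely at inference
            return x
        b, c, h, w = x.shape
        flat = x.flatten(2)                           # (B, C, H*W)
        # Sample K spatial locations (one shared index set across the batch here).
        idx = torch.randint(0, h * w, (self.num_nodes,), device=x.device)
        nodes = flat[:, :, idx].transpose(1, 2)       # (B, K, C) vertex features
        # Soft adjacency from pairwise feature similarity (one possible choice).
        adj = torch.softmax(nodes @ nodes.transpose(1, 2) / c ** 0.5, dim=-1)
        reasoned = self.update(adj @ nodes)           # one graph-propagation step
        distortion = self.strength * torch.tanh(reasoned)
        # Scatter the distortion back onto the sampled positions only.
        flat = flat.clone()
        flat[:, :, idx] = flat[:, :, idx] + distortion.transpose(1, 2)
        return flat.view(b, c, h, w)
```

Used as an add-on after a backbone stage, e.g. `y = DropGraphSketch(channels=256)(features)`, the module perturbs training-time activations through graph reasoning rather than zeroing them as Dropout does, and the `self.training` guard realizes the claim that the graph can be completely skipped during inference.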