Neural networks are easier to optimise when they have many more weights than are required for modelling the mapping from inputs to outputs. This suggests a two-stage learning procedure that first learns a large net and then prunes away connections or hidden units. But standard training does not necessarily encourage nets to be amenable to pruning. We introduce targeted dropout, a method for training a neural network so that it is robust to subsequent pruning. Before computing the gradients for each weight update, targeted dropout stochastically selects a set of units or weights to be dropped using a simple self-reinforcing sparsity criterion and then computes the gradients for the remaining weights. The resulting network is robust to post hoc pruning of weights or units that frequently occur in the dropped sets. The method improves upon more complicated sparsifying regularisers while being simple to implement and easy to tune.
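To make the dropped-set selection concrete, below is a minimal sketch of targeted dropout applied to a weight matrix, assuming magnitude is the sparsity criterion and that the targeting proportion and drop rate are passed as hypothetical parameters `targ_rate` and `drop_rate` (this is an illustrative PyTorch sketch, not the authors' reference implementation):

```python
import torch

def targeted_weight_dropout(w, targ_rate=0.5, drop_rate=0.5, training=True):
    """Illustrative targeted dropout on a weight matrix of shape (in, out).

    For each output unit (column of w), the targ_rate fraction of weights
    with smallest magnitude is marked as the candidate set, and each
    candidate is then dropped independently with probability drop_rate
    during training. Weights outside the candidate set are never dropped.
    """
    if not training:
        return w
    k = int(targ_rate * w.shape[0])
    if k == 0:
        return w
    w_abs = w.abs()
    # Per-column magnitude threshold: the k-th smallest |w| in each column.
    threshold = torch.kthvalue(w_abs, k, dim=0, keepdim=True).values
    targeted = w_abs <= threshold
    # Independently drop each targeted weight with probability drop_rate.
    drop_mask = targeted & (torch.rand_like(w) < drop_rate)
    return w * (~drop_mask).float()
```

In this sketch the selection is self-reinforcing in the sense described above: low-magnitude weights are dropped most often, so the surviving weights carry the mapping and the frequently dropped ones can later be pruned with little loss. A layer would call such a function on its weights inside the forward pass during training, e.g. `y = x @ targeted_weight_dropout(weight, 0.5, 0.5, training)`.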