Fully exploiting the learning capacity of neural networks requires overparameterized dense networks. On the other hand, directly training sparse neural networks typically yields unsatisfactory performance. The Lottery Ticket Hypothesis (LTH) provides a novel view of sparse network training while maintaining its capacity. Concretely, it claims that a randomly initialized dense network contains winning tickets, found by iterative magnitude pruning, that preserve promising trainability (we say such subnetworks are in a trainable condition). In this work, we regard the winning ticket from LTH as a subnetwork in a trainable condition and take its performance as our benchmark, then approach from a complementary direction to articulate the Dual Lottery Ticket Hypothesis (DLTH): a randomly selected subnetwork of a randomly initialized dense network can be transformed into a trainable condition and achieve admirable performance compared with LTH; that is, random tickets in a given lottery pool can be transformed into winning tickets. Specifically, using uniformly randomly selected subnetworks to represent the general case, we propose a simple sparse network training strategy, Random Sparse Network Transformation (RST), to substantiate our DLTH. Concretely, we introduce a regularization term that borrows learning capacity from, and gradually extrudes information out of, the weights that will be masked. After transforming the randomly selected subnetworks, we conduct regular finetuning and evaluate the resulting models under fair comparisons with LTH and other strong baselines. Extensive experiments on several public datasets, together with comparisons against competitive approaches, validate our DLTH as well as the effectiveness of the proposed RST. We expect our work to pave the way for new research directions in sparse network training. Our code is available at https://github.com/yueb17/DLTH.
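To make the transformation concrete, below is a minimal PyTorch sketch of the regularization-based "information extrusion" idea described above. The toy network, toy data, penalty form, and linear ramp-up schedule are illustrative assumptions on our part, not the released implementation (see the repository for the latter).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy dense network standing in for the randomly initialized network.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

# Randomly select a subnetwork: mask entry 1 = keep, 0 = will be masked.
sparsity = 0.9
masks = {name: (torch.rand_like(p) > sparsity).float()
         for name, p in model.named_parameters() if p.dim() > 1}

def extrusion_penalty(model, masks, strength):
    """L2 penalty on the to-be-masked weights only ("information extrusion")."""
    return strength * sum(((1 - masks[n]) * p).pow(2).sum()
                          for n, p in model.named_parameters() if n in masks)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Phase 1: transformation. Gradually raise the penalty so learning capacity
# migrates from the to-be-masked weights into the randomly kept ones.
for step in range(500):
    x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))  # toy data
    strength = 1e-4 * step  # assumed linear ramp-up schedule
    loss = criterion(model(x), y) + extrusion_penalty(model, masks, strength)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Phase 2: apply the mask (the penalized weights are now near zero) and
# finetune the resulting sparse network as usual, keeping the mask fixed.
with torch.no_grad():
    for n, p in model.named_parameters():
        if n in masks:
            p.mul_(masks[n])
```

Ramping the penalty rather than hard-masking at once lets the dense network reorganize gradually, so the randomly chosen subnetwork inherits the capacity of the full model before finetuning begins.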