We present a novel hybrid algorithm for training Deep Neural Networks (DNNs) that combines the state-of-the-art Gradient Descent (GD) method with a Mixed Integer Linear Programming (MILP) solver, outperforming GD and its variants in terms of accuracy, as well as resource and data efficiency, for both regression and classification tasks. Our GD+Solver hybrid algorithm, called GDSolver, works as follows: given a DNN $D$ as input, GDSolver invokes GD to partially train $D$ until it gets stuck in a local minimum, at which point GDSolver invokes an MILP solver to exhaustively search a region of the loss landscape around the weight assignments of $D$'s final layer with the goal of tunnelling through and escaping the local minimum. The process is repeated until the desired accuracy is achieved. In our experiments, we find that GDSolver not only scales well to additional data and very large model sizes, but also outperforms all competing methods in terms of rate of convergence and data efficiency. For regression tasks, GDSolver produced models that, on average, had 31.5% lower MSE in 48% less time, and for classification tasks on MNIST and CIFAR10, GDSolver achieved the highest accuracy of all competing methods while using only 50% of the training data that GD baselines required.
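To make the alternating control flow concrete, here is a minimal Python sketch of the loop the abstract describes, not the authors' implementation: `milp_tunnel` is a hypothetical placeholder for the MILP search over the final layer's weights, and the stall-detection threshold, patience, and learning rate are illustrative assumptions.

```python
# Minimal sketch of the GDSolver loop (assumes a PyTorch model whose last
# child module is a Linear layer; `milp_tunnel` is a hypothetical helper).
import torch

def gdsolver(model, loss_fn, data_loader, milp_tunnel,
             target_loss=0.05, lr=1e-3, patience=5, max_rounds=20):
    """Alternate GD training with MILP 'tunnelling' on the final layer.

    `milp_tunnel(final_layer, loss_fn, batch)` stands in for the paper's
    MILP call: it should return improved (weight, bias) tensors found by
    exhaustively searching a region around the current final-layer weights.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    best = float("inf")
    for _ in range(max_rounds):
        stall = 0
        # Phase 1: plain gradient descent until progress stalls,
        # i.e., the loss stops improving for `patience` epochs.
        while stall < patience:
            for x, y in data_loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()
            if loss.item() < best - 1e-4:
                best, stall = loss.item(), 0
            else:
                stall += 1
        if best <= target_loss:
            break
        # Phase 2: MILP search around the final layer's weights to escape
        # the local minimum (hypothetical helper, not a real solver API).
        final = list(model.children())[-1]
        x, y = next(iter(data_loader))
        new_w, new_b = milp_tunnel(final, loss_fn, (x, y))
        with torch.no_grad():
            final.weight.copy_(new_w)
            final.bias.copy_(new_b)
    return model
```

The sketch only fixes the scheduling: GD runs until a stall is detected, the solver is consulted once, and control returns to GD; how the MILP region is encoded is the subject of the paper itself.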