In the context of regularized loss minimization with polyhedral gauges, we show that for a broad class of loss functions (possibly non-smooth and non-convex), and under a simple geometric condition on the input data, it is possible to efficiently identify a subset of features that is guaranteed to have zero coefficients in every optimal solution of every problem whose loss function belongs to this class, before any iterative optimization is performed on the original problem. The procedure is standalone, takes only the data as input, and requires no calls to the loss function. We therefore term it a persistent reduction for this class of regularized loss minimization problems. The reduction can be implemented efficiently via an extreme ray identification subroutine applied to a polyhedral cone formed from the data points. We employ an existing output-sensitive algorithm for extreme ray identification, which makes our guarantee and algorithm applicable to ultra-high-dimensional problems.
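To make the extreme ray identification step concrete, the following is a minimal sketch, assuming the cone in question is generated by one column of the data matrix per feature and that a feature is a removal candidate when its column is not an extreme ray of that cone. The names `X` and `non_extreme_columns` are hypothetical, and the one-LP-per-column loop is purely illustrative; the abstract's actual procedure relies on an existing output-sensitive extreme ray algorithm rather than this brute-force check.

```python
import numpy as np
from scipy.optimize import linprog


def non_extreme_columns(X):
    """Return indices j such that column X[:, j] lies in the cone generated
    by the remaining columns, i.e. it is not an extreme ray generator of
    cone(X[:, 0], ..., X[:, n-1]).

    Each check is a feasibility LP: find lambda >= 0 with
        X_without_j @ lambda = X[:, j].
    (If two columns are positive multiples of each other, both are flagged;
    a real implementation would break such ties before removal.)
    """
    d, n = X.shape
    redundant = []
    for j in range(n):
        others = np.delete(X, j, axis=1)        # generators without column j
        res = linprog(
            c=np.zeros(n - 1),                  # pure feasibility problem
            A_eq=others, b_eq=X[:, j],
            bounds=(0, None),                   # conic (nonnegative) weights
            method="highs",
        )
        if res.success:                         # feasible => column j is redundant
            redundant.append(j)
    return redundant


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 8))
    X[:, 3] = 0.5 * X[:, 0] + 2.0 * X[:, 1]     # column 3 is a conic combination
    print(non_extreme_columns(X))               # expected to contain index 3
```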