Besides the minimization of the prediction error, two of the most desirable properties of a regression scheme are stability and interpretability. Driven by these principles, we propose continuous-domain formulations for one-dimensional regression problems. In our first approach, we use the Lipschitz constant as a regularizer, which results in an implicit tuning of the overall robustness of the learned mapping. In our second approach, we control the Lipschitz constant explicitly using a user-defined upper bound and make use of a sparsity-promoting regularizer to favor simpler (and, hence, more interpretable) solutions. The theoretical study of the latter formulation is motivated in part by its equivalence, which we prove, with the training of a Lipschitz-constrained two-layer univariate neural network with rectified linear unit (ReLU) activations and weight decay. By proving representer theorems, we show that both problems admit global minimizers that are continuous and piecewise-linear (CPWL) functions. Moreover, we propose efficient algorithms that find the sparsest solution of each problem: the CPWL mapping with the fewest linear regions. Finally, we illustrate numerically the outcome of our formulations.
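A minimal sketch of the two formulations, under assumed notation (data pairs $(x_m, y_m)_{m=1}^{M}$, a loss $E$, the Lipschitz constant $\mathrm{Lip}(f)$, a regularization weight $\lambda > 0$, a user-defined bound $L > 0$, and a generic sparsity-promoting regularizer $\mathcal{R}(f)$; the exact search spaces and regularizers are those specified in the body of the paper):

$$
\text{(i)}\quad \min_{f}\;\sum_{m=1}^{M} E\bigl(y_m, f(x_m)\bigr) + \lambda\,\mathrm{Lip}(f),
\qquad
\text{(ii)}\quad \min_{f\,:\,\mathrm{Lip}(f)\le L}\;\sum_{m=1}^{M} E\bigl(y_m, f(x_m)\bigr) + \lambda\,\mathcal{R}(f).
$$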