Regression is one of the core problems tackled in supervised learning. Rectified linear unit (ReLU) neural networks generate continuous and piecewise-linear (CPWL) mappings and are the state-of-the-art approach for solving regression problems. In this paper, we propose an alternative method that leverages the expressivity of CPWL functions. In contrast to deep neural networks, our CPWL parameterization guarantees stability and is interpretable. Our approach relies on the partitioning of the domain of the CPWL function by a Delaunay triangulation. The function values at the vertices of the triangulation are our learnable parameters and identify the CPWL function uniquely. Formulating the learning scheme as a variational problem, we use the Hessian total variation (HTV) as a regularizer to favor CPWL functions with few affine pieces. In this way, we control the complexity of our model through a single hyperparameter. By developing a computational framework to compute the HTV of any CPWL function parameterized by a triangulation, we discretize the learning problem as a generalized least-absolute-shrinkage-and-selection-operator (LASSO) problem. Our experiments validate the use of our method in low-dimensional scenarios.
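To make the setup concrete, here is a minimal, hypothetical sketch in Python (NumPy/SciPy), not the authors' implementation: it builds a Delaunay triangulation of a 2-D domain, treats the vector c of vertex values as the learnable parameters, assembles a forward matrix H from barycentric coordinates so that H @ c gives the CPWL predictions at the data points, and writes the learning problem as a generalized-LASSO objective. The matrix L standing in for the HTV regularization operator is a random placeholder; in the paper's framework it would be derived from the triangulation so that the l1 term equals the HTV of the CPWL function.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# Delaunay triangulation of the domain [-1, 1]^2. Including the corners
# guarantees that every query point lies inside some simplex.
corners = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
sites = np.vstack([corners, rng.uniform(-1.0, 1.0, size=(46, 2))])
tri = Delaunay(sites)

# Toy regression data (illustrative only).
n = 200
x = rng.uniform(-1.0, 1.0, size=(n, 2))
y = np.sin(3 * x[:, 0]) * x[:, 1] + 0.05 * rng.standard_normal(n)

# Forward matrix H: row i holds the barycentric coordinates of x[i] in its
# containing simplex, so (H @ c)[i] is the CPWL prediction at x[i] when c
# holds the function values at the vertices.
simplex = tri.find_simplex(x)
T = tri.transform[simplex]            # per-point affine map to barycentric coords
b = np.einsum('nij,nj->ni', T[:, :2], x - T[:, 2])
bary = np.c_[b, 1.0 - b.sum(axis=1)]
H = np.zeros((n, len(sites)))
rows = np.arange(n)
for k in range(3):
    H[rows, tri.simplices[simplex, k]] = bary[:, k]

# Generalized-LASSO objective:  ||y - H c||_2^2 + lam * ||L c||_1.
# L is a random PLACEHOLDER for the HTV regularization matrix of the paper.
lam = 0.1
L = rng.standard_normal((3 * len(tri.simplices), len(sites)))

def objective(c):
    r = y - H @ c
    return r @ r + lam * np.abs(L @ c).sum()

c0 = np.zeros(len(sites))
print(objective(c0))  # objective value at the all-zeros initialization
```

Any generalized-LASSO solver (for instance ADMM or a proximal-gradient method) can then minimize this objective over c; sparsity of L @ c corresponds to a CPWL fit with few affine pieces, with the single hyperparameter lam controlling model complexity.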