For basic machine learning problems, the expected error is used to evaluate model performance. Since the data distribution is usually unknown, we make the simple hypothesis that the data are sampled independently and identically distributed (i.i.d.), and by the Law of Large Numbers (LLN) the mean value of the loss function is used as the empirical risk. This is known as the Monte Carlo method. However, when the LLN is not applicable, as in imbalanced data problems, the empirical risk can cause overfitting and may degrade robustness and generalization ability. Inspired by the framework of nonlinear expectation theory, we substitute the maximum of subgroup mean losses for the mean value of the loss function; we call this the nonlinear Monte Carlo method. To apply numerical optimization methods, we linearize and smooth the functional of the maximum empirical risk and obtain the descent direction via quadratic programming. With the proposed method, we achieve better performance than SOTA backbone models with fewer training steps, and greater robustness on basic regression and imbalanced classification tasks.
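As a concrete illustration of the substitution described above, the following Python sketch computes the maximum of subgroup mean losses and contrasts it with the ordinary mean. The function name `max_subgroup_risk`, the grouping scheme, and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def max_subgroup_risk(losses, group_ids):
    """Nonlinear Monte Carlo risk: the maximum of per-subgroup mean losses.

    losses    -- per-sample loss values, shape (n,)
    group_ids -- subgroup index of each sample, shape (n,)
    """
    groups = np.unique(group_ids)
    group_means = np.array([losses[group_ids == g].mean() for g in groups])
    return group_means.max()

# Toy example: two imbalanced subgroups. The ordinary empirical risk
# (overall mean) hides the poorly fit minority group, while the
# max-of-subgroup-means risk exposes it.
losses = np.array([0.10, 0.20, 0.10, 0.15, 2.00, 1.80])
groups = np.array([0, 0, 0, 0, 1, 1])
print(max_subgroup_risk(losses, groups))  # 1.9
print(losses.mean())                      # 0.725
```

Minimizing this max-subgroup risk instead of the overall mean forces the model to attend to the worst-performing subgroup, which is the motivation for the nonlinear Monte Carlo method in imbalanced settings.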