Conditional Value-at-Risk ($\mathrm{CV@R}$) is one of the most popular measures of risk, and has recently been considered as a performance criterion in supervised statistical learning, as it is related to desirable operational features in modern applications, such as safety, fairness, distributional robustness, and prediction error stability. However, due to its variational definition, $\mathrm{CV@R}$ is commonly believed to result in difficult optimization problems, even for smooth and strongly convex loss functions. We disprove this statement by establishing noisy (i.e., fixed-accuracy) linear convergence of stochastic gradient descent for sequential $\mathrm{CV@R}$ learning, for a large class of not necessarily strongly convex (or even convex) loss functions satisfying a set-restricted Polyak-Łojasiewicz inequality. This class contains all smooth and strongly convex losses, confirming that classical problems, such as linear least squares regression, can be solved efficiently under the $\mathrm{CV@R}$ criterion, just as efficiently as their risk-neutral versions. Our results are illustrated numerically on such a risk-aware ridge regression task, also verifying their validity in practice.
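For concreteness, the following minimal sketch shows one way a sequential $\mathrm{CV@R}$ learning scheme of this kind can be set up for ridge regression, assuming the standard Rockafellar-Uryasev variational form $\mathrm{CV@R}_\alpha(Z) = \min_t \{ t + \tfrac{1}{1-\alpha}\,\mathbb{E}[(Z - t)_+] \}$ and plain SGD over the model parameters and the auxiliary variable jointly. All names and hyperparameter values (`alpha`, `lam`, `step`, `n_steps`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Risk-aware ridge regression via SGD on the Rockafellar-Uryasev form of CVaR:
#   min_{theta, t}  t + (1/(1-alpha)) * E[(loss(theta; x, y) - t)_+] + lam * ||theta||^2
# This is a hedged sketch: hyperparameters and data are synthetic and illustrative.

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative only).
d, n = 5, 10_000
theta_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_true + 0.1 * rng.normal(size=n)

alpha = 0.95      # CVaR level: weight on the worst (1 - alpha) fraction of losses
lam = 1e-3        # ridge penalty
step = 1e-3       # constant step size (convergence to a noise-dominated neighborhood)
n_steps = 50_000

theta = np.zeros(d)
t = 0.0           # auxiliary variable of the variational CVaR definition

for _ in range(n_steps):
    i = rng.integers(n)
    x_i, y_i = X[i], y[i]
    resid = x_i @ theta - y_i
    loss = resid ** 2                    # squared prediction loss
    tail = 1.0 if loss > t else 0.0      # indicator of a "tail" (high-loss) sample

    # Stochastic (sub)gradients of the sampled objective.
    grad_theta = (tail / (1.0 - alpha)) * 2.0 * resid * x_i + 2.0 * lam * theta
    grad_t = 1.0 - tail / (1.0 - alpha)

    theta -= step * grad_theta
    t -= step * grad_t

print("estimated theta:", theta)
print("estimated CVaR threshold t:", t)
```

With a constant step size, the iterates settle around a fixed-accuracy neighborhood of the solution, which is the "noisy" linear convergence regime described in the abstract.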