Training a classifier under non-convex constraints has received increasing attention in the machine learning community thanks to its wide range of applications, such as algorithmic fairness and class-imbalanced classification. However, several recent works addressing non-convex constraints have focused only on simple models such as logistic regression and support vector machines. Neural networks, one of the most popular models for classification today, are precluded by these analyses and lack theoretical guarantees. In this work, we show that overparameterized neural networks can achieve a near-optimal and near-feasible solution of non-convex constrained optimization problems via projected stochastic gradient descent. Our key ingredient is a no-regret analysis of online learning for neural networks in the overparameterization regime, which may be of independent interest in online learning applications.
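For intuition, below is a minimal sketch of projected stochastic gradient descent on a generic constrained problem. It is not the paper's algorithm: the l2-ball constraint, the quadratic objective, and the helper names (project, projected_sgd) are illustrative assumptions, and the paper's non-convex constraints and neural-network parameterization are abstracted away.

    import numpy as np

    def project(v, radius=1.0):
        # Euclidean projection onto an l2 ball; stands in for the
        # projection onto the feasible set, which here is an
        # illustrative placeholder, not the paper's constraint set.
        norm = np.linalg.norm(v)
        return v if norm <= radius else v * (radius / norm)

    def projected_sgd(grad_fn, x0, steps=1000, lr=0.01):
        # Projected SGD: take a stochastic gradient step, then
        # project the iterate back onto the feasible set.
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            x = project(x - lr * grad_fn(x))
        return x

    # Usage: minimize ||x - target||^2 with noisy gradients, subject
    # to ||x|| <= 1. The unconstrained optimum lies outside the ball,
    # so the returned iterate lands near the ball's boundary.
    rng = np.random.default_rng(0)
    target = np.array([2.0, 2.0])
    noisy_grad = lambda x: 2.0 * (x - target) + 0.1 * rng.standard_normal(2)
    x_hat = projected_sgd(noisy_grad, np.zeros(2))

The projection step is what enforces near-feasibility at every iterate; the paper's contribution is showing that, for overparameterized networks, this scheme also attains near-optimality despite the non-convexity of both the objective and the constraints.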