We study the training of finite-width two-layer smoothed ReLU networks for binary classification using the logistic loss. We show that gradient descent drives the training loss to zero if the initial loss is small enough. When the data satisfies certain cluster and separation conditions and the network is wide enough, we show that one step of gradient descent reduces the loss sufficiently that the first result applies.
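The sketch below illustrates the training setup the abstract describes, not the paper's exact construction: a two-layer network with a smoothed ReLU activation (softplus is used here as one possible smoothing), trained on the logistic loss by full-batch gradient descent, with the hidden-layer weights trained and the outer weights held fixed. The width, step size, and the synthetic two-cluster data are illustrative assumptions.

```python
import numpy as np

def softplus(z):
    # Smoothed ReLU: softplus(z) = log(1 + exp(z)), computed stably.
    return np.logaddexp(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def network(X, W, a):
    # f(x) = sum_j a_j * sigma(<w_j, x>) for a width-m hidden layer.
    return softplus(X @ W.T) @ a

def logistic_loss(X, y, W, a):
    # Average logistic loss: mean over i of log(1 + exp(-y_i f(x_i))).
    margins = y * network(X, W, a)
    return np.mean(np.logaddexp(0.0, -margins))

def grad_W(X, y, W, a):
    # Gradient of the loss with respect to the hidden-layer weights W;
    # the outer weights a are kept fixed in this sketch.
    margins = y * network(X, W, a)            # shape (n,)
    coeff = -y * sigmoid(-margins)            # dL/df for each sample
    act_grad = sigmoid(X @ W.T)               # softplus'(z) = sigmoid(z), shape (n, m)
    return (act_grad * coeff[:, None]).T @ X * a[:, None] / len(y)

rng = np.random.default_rng(0)
n, d, m = 200, 10, 512                        # samples, input dim, width (assumed values)

# Illustrative clustered, well-separated data: labels +1 / -1,
# with the two classes offset along the first coordinate.
y = rng.choice([1.0, -1.0], size=n)
X = rng.normal(size=(n, d))
X[:, 0] += np.where(y > 0, 3.0, -3.0)

W = rng.normal(size=(m, d)) / np.sqrt(d)      # trained first-layer weights
a = rng.choice([1.0, -1.0], size=m) / m       # fixed outer weights

lr = 1.0
for step in range(200):
    W -= lr * grad_W(X, y, W, a)
print("final training loss:", logistic_loss(X, y, W, a))
```

On separated clusters like these, the training loss typically decreases toward zero over the gradient descent iterations, consistent with the regime the abstract describes.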