We describe a gradient-based method to discover local error maximizers of a deep neural network (DNN) used for regression, assuming the availability of an "oracle" capable of providing real-valued supervision (a regression target) for any sample. For example, the oracle could be a numerical solver which, operationally, is much slower than the DNN. Given a discovered set of local error maximizers, the DNN is either fine-tuned or retrained in the manner of active learning.