We introduce a new approach for capturing model uncertainty for neural networks (NNs) in regression, which we call Neural Optimization-based Model Uncertainty (NOMU). The main idea of NOMU is to design a network architecture consisting of two connected sub-networks, one for the model prediction and one for the model uncertainty, and to train it using a carefully designed loss function. With this design, NOMU can provide model uncertainty for any given (previously trained) NN by plugging it into the framework as the sub-network used for model prediction. NOMU is designed to yield uncertainty bounds (UBs) that satisfy four important desiderata regarding model uncertainty which established methods often violate. Furthermore, our UBs are themselves representable as a single NN, which leads to computational cost advantages in applications such as Bayesian optimization. We evaluate NOMU experimentally in multiple settings. For regression, we show that NOMU performs as well as or better than established benchmarks. For Bayesian optimization, we show that NOMU outperforms all other benchmarks.
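To make the two-sub-network design concrete, below is a minimal PyTorch sketch of the idea: an f-network for the model prediction, an r-network for the raw model uncertainty, and a loss that fits the data, drives r toward zero at the training points, and rewards large r at artificial points sampled from the input space. This is an illustrative sketch, not the paper's implementation: the hyperparameter names `pi_sqr`, `pi_exp`, and `c_exp`, the network widths, and the sampling of artificial points are assumptions, and the paper additionally connects the two sub-networks, whereas they are kept separate here for brevity.

```python
import torch
import torch.nn as nn

class NOMU(nn.Module):
    """Illustrative two-sub-network architecture in the spirit of NOMU."""

    def __init__(self, dim_in: int, width: int = 64):
        super().__init__()
        # f-network: model prediction (could be replaced by a pre-trained NN)
        self.f = nn.Sequential(
            nn.Linear(dim_in, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )
        # r-network: raw model uncertainty, kept non-negative via softplus
        self.r = nn.Sequential(
            nn.Linear(dim_in, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1), nn.Softplus(),
        )

    def forward(self, x):
        return self.f(x), self.r(x)

def nomu_loss(model, x_train, y_train, x_art,
              pi_sqr: float = 0.1, pi_exp: float = 0.01,
              c_exp: float = 30.0):
    """Sketch of a NOMU-style loss; coefficients are assumed placeholders."""
    f_train, r_train = model(x_train)
    _, r_art = model(x_art)  # artificial inputs, e.g. uniform samples
    data_fit = ((f_train - y_train) ** 2).mean()   # fit the data with f
    shrink = (r_train ** 2).mean()                 # push r to 0 on data
    expand = torch.exp(-c_exp * r_art).mean()      # reward large r elsewhere
    return data_fit + pi_sqr * shrink + pi_exp * expand
```

Because the trained f- and r-networks together form a single NN, the resulting UBs can be evaluated (and optimized over) cheaply, which is the source of the computational advantage in Bayesian optimization mentioned above.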