Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge about the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference, which takes place in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets spanning multiple domains such as healthcare, criminal justice, and credit scoring.
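The abstract does not spell out the form of the constraint prior. The following is a minimal sketch of the general idea only, assuming a soft-penalty formulation in which a standard Gaussian weight prior is multiplied by a term that down-weights parameter settings whose outputs violate a known bound at sampled constraint inputs. The function names, the softplus violation term, and the penalty weight `gamma` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(weights, x, hidden=20):
    """One-hidden-layer MLP; `weights` is a flat parameter vector."""
    w1 = weights[:hidden].reshape(1, hidden)                 # input -> hidden
    b1 = weights[hidden:2 * hidden]                          # hidden biases
    w2 = weights[2 * hidden:3 * hidden].reshape(hidden, 1)   # hidden -> output
    b2 = weights[3 * hidden]                                 # output bias
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def log_gaussian_prior(weights, sigma=1.0):
    """Log of a standard isotropic Gaussian prior over BNN weights
    (normalizing constant dropped)."""
    return -0.5 * np.sum(weights ** 2) / sigma ** 2

def log_constrained_prior(weights, cx, lo, hi, gamma=50.0):
    """Hypothetical output-constrained prior: the Gaussian weight prior
    minus a penalty whenever the network's outputs at constraint inputs
    `cx` leave the allowed interval [lo, hi]. The softplus violation
    term is one illustrative choice, not the paper's formulation."""
    y = mlp_forward(weights, cx)
    # softplus(lo - y) grows when y < lo; softplus(y - hi) when y > hi
    violation = np.logaddexp(0.0, lo - y) + np.logaddexp(0.0, y - hi)
    return log_gaussian_prior(weights) - gamma * np.sum(violation)

# Usage: score two random weight draws against a constraint that the
# function must stay within [-1, 1] on the interval x in [0, 2].
cx = np.linspace(0.0, 2.0, 10).reshape(-1, 1)
for _ in range(2):
    w = rng.normal(size=3 * 20 + 1)  # 61 parameters for hidden=20
    print(log_constrained_prior(w, cx, lo=-1.0, hi=1.0))
```

Because the constraint enters as an ordinary (unnormalized) log-prior term, standard black-box inference schemes such as HMC or variational methods can target the resulting posterior unchanged, which is consistent with the abstract's claim that the approach stays fully within the Bayesian framework.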