Constrained learning is prevalent in many statistical tasks. Recent work proposes distance-to-set penalties to derive estimators under general constraints that can be specified as sets, but focuses on obtaining point estimates that do not come with corresponding measures of uncertainty. To remedy this, we approach distance-to-set regularization from a Bayesian lens. We consider a class of smooth distance-to-set priors, showing that they yield well-defined posteriors that enable uncertainty quantification for constrained learning problems. We discuss relationships to and advantages over prior work on Bayesian constraint relaxation. Moreover, we prove that our approach is optimal in an information-geometric sense for finite penalty parameters $\rho$, and enjoys favorable statistical properties as $\rho\to\infty$. The method is designed to perform effectively within gradient-based MCMC samplers, as illustrated on a suite of simulated and real data applications.