Approximate Bayesian inference methods provide a powerful suite of tools for finding approximations to intractable posterior distributions. However, machine learning applications typically involve selecting actions, which -- in a Bayesian setting -- depend on the posterior distribution only through its contribution to expected utility. A growing body of work on loss-calibrated approximate inference has therefore sought to develop posterior approximations that are sensitive to the influence of the utility function. Here we introduce loss-calibrated expectation propagation (Loss-EP), a loss-calibrated variant of expectation propagation. The method resembles standard EP with an additional factor that "tilts" the posterior towards higher-utility decisions. We demonstrate applications to Gaussian process classification under binary utility functions with asymmetric penalties on false-negative and false-positive errors, and show how this asymmetry can have dramatic consequences for what information is "useful" to capture in an approximation.
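To fix ideas, here is the standard decision-theoretic setup the abstract alludes to (a minimal sketch in our own notation; the symbols $U$, $a$, $c_{\mathrm{FP}}$, and $c_{\mathrm{FN}}$ are illustrative assumptions, not taken from the text). The Bayes-optimal action maximizes posterior expected utility,
\[
a^*(x) = \arg\max_{a} \; \mathbb{E}_{p(y \mid x, \mathcal{D})}\!\left[ U(y, a) \right],
\]
and for a binary utility that charges $c_{\mathrm{FN}}$ for false negatives and $c_{\mathrm{FP}}$ for false positives (zero cost otherwise), this reduces to a threshold rule: predict $y = 1$ whenever
\[
p(y = 1 \mid x, \mathcal{D}) > \frac{c_{\mathrm{FP}}}{c_{\mathrm{FP}} + c_{\mathrm{FN}}}.
\]
An asymmetric penalty thus shifts the decision threshold away from $1/2$, so the regions of the posterior that matter for the decision shift with it.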