Bayes's rule deals with hard evidence, that is, we can calculate the probability of event $A$ occurring given that event $B$ has occurred. Soft evidence, on the other hand, involves a degree of uncertainty about whether event $B$ has actually occurred. Jeffrey's rule of conditioning provides a way to update beliefs in the case of soft evidence. We provide a framework for learning a probability distribution on the weights of a neural network trained using soft evidence, by way of two simple algorithms for approximating Jeffrey conditionalization. We propose an experimental protocol for benchmarking these algorithms on empirical datasets and find that the Jeffrey-based methods are competitive with, or better than, the baselines in terms of accuracy, while improving calibration metrics by upwards of 20% in some cases, even when the data contains mislabeled points.
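For concreteness, Jeffrey's rule of conditioning can be stated as follows (the partition notation $\{B_i\}$ and the soft-evidence probabilities $q_i$ are introduced here for illustration; the abstract itself fixes no notation). Given a partition $\{B_i\}$ of the sample space and soft evidence that revises the probabilities of the $B_i$ to new values $q_i$ with $\sum_i q_i = 1$, the updated probability of an event $A$ is
\[
P_{\text{new}}(A) = \sum_i P(A \mid B_i)\, q_i ,
\]
which reduces to ordinary Bayesian conditioning on $B_j$ in the hard-evidence case $q_j = 1$.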