Probabilistic circuits (PCs) are models that allow exact and tractable probabilistic inference. In contrast to neural networks, they are often assumed to be well-calibrated and robust to out-of-distribution (OOD) data. In this paper, we show that PCs are in fact not robust to OOD data, i.e., they do not know what they do not know. We then show how this challenge can be overcome by model uncertainty quantification. To this end, we propose tractable dropout inference (TDI), an inference procedure that estimates uncertainty by deriving an analytical solution to Monte Carlo dropout (MCD) through variance propagation. Unlike MCD in neural networks, which comes at the cost of multiple network evaluations, TDI provides tractable, sampling-free uncertainty estimates in a single forward pass. We demonstrate in a series of experiments evaluating classification confidence and uncertainty estimates on real-world data that TDI improves the robustness of PCs to distribution shift and OOD data.
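To make the contrast between MCD and analytical variance propagation concrete, here is a minimal sketch (not the paper's implementation) at a single sum node of a PC, assuming Bernoulli dropout with keep probability q and independent children; the function names and the toy values are illustrative only. MCD estimates the output moments by averaging many stochastic evaluations under sampled dropout masks, whereas a TDI-style computation propagates the mean and variance in closed form:

```python
import numpy as np

def mcd_sum_node(weights, child_means, q=0.8, n_samples=100_000, seed=0):
    """Monte Carlo dropout at one sum node: sample Bernoulli(q) keep-masks
    over the children and average many stochastic evaluations."""
    rng = np.random.default_rng(seed)
    masks = (rng.random((n_samples, len(weights))) < q).astype(float)
    outputs = masks @ (weights * child_means)  # one node output per sampled mask
    return outputs.mean(), outputs.var()

def tdi_sum_node(weights, child_means, child_vars, q=0.8):
    """Closed-form moment propagation through the same sum node.
    For S = sum_i w_i * d_i * C_i with d_i ~ Bernoulli(q) independent of C_i:
        E[S]   = q * sum_i w_i * E[C_i]
        Var[S] = sum_i w_i^2 * (q * Var[C_i] + q * (1 - q) * E[C_i]^2)
    """
    mean = q * np.sum(weights * child_means)
    var = np.sum(weights**2 * (q * child_vars + q * (1 - q) * child_means**2))
    return mean, var

w  = np.array([0.5, 0.3, 0.2])     # sum-node weights (illustrative)
mu = np.array([0.10, 0.40, 0.90])  # child outputs for one input (illustrative)
print(mcd_sum_node(w, mu))                   # sampled moments, many passes
print(tdi_sum_node(w, mu, np.zeros(3)))      # exact moments, single pass
```

Applied recursively bottom-up through the circuit's sum and product nodes, this kind of closed-form propagation replaces the many stochastic network evaluations of MCD with a single forward pass.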