Variational inference (VI) is a cornerstone of modern Bayesian learning, enabling approximate inference in complex models that would otherwise be intractable. However, its formulation depends on expectations and divergences defined through high-dimensional integrals, often rendering analytical treatment impossible and necessitating heavy reliance on approximate learning and inference techniques. Possibility theory, an imprecise probability framework, allows epistemic uncertainty to be modelled directly rather than through subjective probabilities. While this framework provides robustness and interpretability under sparse or imprecise information, adapting VI to the possibilistic setting requires rethinking core concepts such as entropy and divergence, which presuppose additivity. In this work, we develop a principled formulation of possibilistic variational inference and apply it to a special class of exponential-family functions, highlighting parallels with their probabilistic counterparts and revealing the distinctive mathematical structures of possibility theory.