Morgan and McIver's weakest pre-expectation framework is one of the most well-established methods for deductive verification of probabilistic programs. Roughly, the idea is to generalize binary state assertions to real-valued expectations, which can measure expected values of probabilistic program quantities. While loop-free programs can be analyzed by mechanically transforming expectations, verifying loops usually requires finding an invariant expectation, a difficult task. We propose a new view of invariant expectation synthesis as a regression problem: given an input state, predict the average value of the post-expectation in the output distribution. Guided by this perspective, we develop the first data-driven invariant synthesis method for probabilistic programs. Unlike prior work on probabilistic invariant inference, our approach can learn piecewise continuous invariants without relying on template expectations, and it works with only black-box access to the program. We also develop a data-driven approach to learning sub-invariants, which can be used to upper- or lower-bound expected values. We implement our approaches and demonstrate their effectiveness on a variety of benchmarks from the probabilistic programming literature.
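To make the regression view concrete, the sketch below (not the authors' implementation) illustrates it on a toy loop `while flip(p): x = x + 1` with post-expectation `x`: sample initial states, estimate the expected post-expectation by running the program as a black box, and fit a model to those estimates. The program, the helper names (`run_program`, `estimate_post_expectation`), the choice of `p = 0.5`, and the use of least squares are all illustrative assumptions; for this loop the analytic invariant expectation is `x + p/(1-p)`, so the learned linear model should approximate `x + 1`.

```python
# Minimal illustrative sketch of invariant synthesis as regression
# (assumed example, not the paper's method or benchmarks).
import random
import numpy as np

P = 0.5  # probability of continuing the loop (assumed for illustration)

def run_program(x0, p=P):
    """Black-box execution of 'while flip(p): x = x + 1' from initial state x0."""
    x = x0
    while random.random() < p:
        x += 1
    return x

def estimate_post_expectation(x0, trials=2000):
    """Monte Carlo estimate of E[post-expectation of final state | initial state x0]."""
    return sum(run_program(x0) for _ in range(trials)) / trials

# Regression data set: input states -> estimated expected value of the post-expectation.
init_states = np.arange(0.0, 20.0)
targets = np.array([estimate_post_expectation(x0) for x0 in init_states])

# Fit a candidate invariant I(x) = a*x + b by least squares.
A = np.column_stack([init_states, np.ones_like(init_states)])
(a, b), *_ = np.linalg.lstsq(A, targets, rcond=None)

print(f"learned candidate invariant: I(x) ~ {a:.3f}*x + {b:.3f}")
print(f"analytic invariant for this toy loop: I(x) = x + {P / (1 - P):.3f}")
```

In the full approach described above, such a learned candidate would still have to be checked (e.g., against the weakest pre-expectation conditions) before being accepted as a true invariant; the sketch only shows the data-driven fitting step.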