We explore a method of statistical estimation called Maximum Entropy on the Mean (MEM), which is based on an information-driven criterion that quantifies the compliance of a given point with a reference prior probability measure. At the core of this approach lies the MEM function, a partial minimization of the Kullback-Leibler divergence over a linear constraint. In many cases, this function is known to admit a simpler representation (the Cram\'er rate function). Via the connection to exponential families of probability distributions, we study general conditions under which this representation holds. We then address how the associated MEM estimator gives rise to a wide class of MEM-based regularized linear models for solving inverse problems. Finally, we propose an algorithmic framework, based on the Bregman proximal gradient method, for solving these problems efficiently, alongside proximal operators for commonly used reference distributions. The article is complemented by a software package for experimentation with and exploration of the MEM approach in applications.
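To make the algorithmic ingredient concrete: with the negative-entropy kernel on the probability simplex, a Bregman proximal gradient step has a closed form as a multiplicative (entropic mirror descent) update. The following is a minimal sketch of that special case on an illustrative least-squares problem; the matrix, ground truth, step size, and iteration count are assumptions for demonstration, not the paper's setup.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a Bregman proximal
# gradient iteration with the negative-entropy kernel on the simplex,
# i.e. entropic mirror descent, applied to f(x) = 0.5 * ||Ax - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
p_true = np.array([0.7, 0.1, 0.1, 0.05, 0.05])  # hypothetical ground truth on the simplex
b = A @ p_true

def grad(x):
    """Gradient of f(x) = 0.5 * ||Ax - b||^2."""
    return A.T @ (A @ x - b)

L = np.linalg.norm(A.T @ A, 2)   # f is L-smooth relative to entropy on the simplex
step = 1.0 / L
x = np.full(5, 0.2)              # uniform starting point on the simplex

res_init = np.linalg.norm(A @ x - b)
for _ in range(500):
    # Bregman step: x+ = argmin_u  <grad f(x), u> + (1/step) * KL(u || x)
    # over the simplex, which in closed form is a multiplicative update
    # followed by renormalization.
    x = x * np.exp(-step * grad(x))
    x /= x.sum()

res_final = np.linalg.norm(A @ x - b)
```

The choice of the entropy kernel keeps the iterates strictly positive and normalized without any projection step, which is one reason KL-based Bregman geometries pair naturally with probability-valued unknowns.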