We present a theoretical framework for probabilistic learning derived from the Maximum Probability (MP) Theorem introduced in this paper. In this probabilistic framework, a model is defined as an event in a probability space, and a model, or its associated event (whether the true underlying model or the parameterized model), has a quantified probability measure. This quantification is derived from the MP Theorem, in which we show that an event's probability measure has an upper bound given its conditional distribution with respect to an arbitrary random variable. In this alternative framework, the notion of model parameters is absorbed into the definition of the model or its associated event. The framework therefore departs from the conventional approach of assuming a prior on the model parameters. Instead, the regularizing effect of a parameter prior appears through maximizing the probabilities of models, or, in information-theoretic terms, minimizing a model's information content. The probability of a model in our framework is invariant to reparameterization and depends solely on the model's likelihood function. Moreover, rather than maximizing the posterior as in the conventional Bayesian setting, the objective function in our framework is defined as the probability of set operations (e.g., intersection) applied to the event of the true underlying model and the event of the model at hand. As a derivation of the MP Theorem, our theoretical framework adds clarity to probabilistic learning by solidifying the definition of probabilistic models, quantifying their probabilities, and providing a visual understanding of objective functions.
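The upper-bound claim above can be checked numerically on a discrete toy example. Since P(A | X = x) = p_{X|A}(x) P(A) / p_X(x) <= 1 for every x, it follows that P(A) <= inf_x p_X(x) / p_{X|A}(x). The sketch below is our own illustration of a bound of this Bayes-derived form on an arbitrary toy joint distribution; it is an assumption for demonstration, not the paper's construction.

```python
# Toy check: P(A) is upper-bounded by inf_x p_X(x) / p_{X|A}(x).
# The joint table over (x, "x is in A") is an arbitrary illustrative choice.
joint = {
    (0, True): 0.10, (0, False): 0.20,
    (1, True): 0.15, (1, False): 0.25,
    (2, True): 0.05, (2, False): 0.25,
}

p_A = sum(p for (x, in_A), p in joint.items() if in_A)            # P(A)
p_X = {x: joint[(x, True)] + joint[(x, False)] for x in range(3)} # marginal p_X
p_X_given_A = {x: joint[(x, True)] / p_A for x in range(3)}       # p_{X|A}

# The bound holds for every x, so the tightest version takes the minimum.
bound = min(p_X[x] / p_X_given_A[x] for x in range(3))
print(p_A, bound)  # P(A) is at most the computed bound
```

Here the bound is strictly larger than P(A); the paper's framework is concerned with how tight such bounds can be made and how they quantify a model's probability.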