We recast rational inattention as a Bayesian predictive decision problem in which the agent reports a predictive distribution and is evaluated by a proper local scoring rule. This yields a direct link to rate-distortion theory and shows that Shannon entropy emerges endogenously as the honest local utility for predictive refinement. Bernardo's characterization of proper local scoring rules, together with Shannon's amalgamation invariance, implies that the logarithmic score, and hence mutual information, is the unique information measure consistent with coherent prediction under refinement of the state space. Information costs therefore need not be assumed: they arise as expected predictive utility. Within this framework we establish a complete-class result: the optimal policies are Gibbs-Boltzmann channels, with the classical rational-inattention family recovered as a special case. Canonical models appear as geometric specializations of the same structure, including multinomial logit (and IIA) under entropic regularization, James-Stein shrinkage as optimal capacity allocation in Gaussian learning, and linear-quadratic-Gaussian control as the capacity-optimal Gaussian channel. Overall, the Bayesian predictive formulation reframes bounded rationality as an optimal design principle: finite information capacity is an endogenous solution to a well-posed predictive problem, and behaviors often attributed to cognitive frictions (soft choice, regularization, sparsity, and screening) arise as rational responses to the geometry of predictive refinement.
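As an illustrative sketch (not the paper's own code), the Gibbs-Boltzmann channel mentioned above can be computed by the standard Blahut-Arimoto-style fixed point from the rational-inattention literature: the conditional policy tilts the action marginal by exponentiated payoffs, p(a|x) ∝ p(a) exp(u(x,a)/λ), and the marginal p(a) is determined self-consistently under the prior over states. The payoff matrix `u`, prior `mu`, and cost parameter `lam` here are hypothetical inputs chosen for the example.

```python
import numpy as np

def rational_inattention_channel(u, mu, lam, iters=500):
    """Fixed-point iteration for the Gibbs-Boltzmann channel.

    u   : (n_states, n_actions) payoff matrix u(x, a)
    mu  : (n_states,) prior over states
    lam : marginal cost of information (Lagrange multiplier on capacity)
    """
    n_states, n_actions = u.shape
    p_a = np.full(n_actions, 1.0 / n_actions)  # initial action marginal
    for _ in range(iters):
        # Conditional policy: tilt the marginal by exponentiated payoffs
        w = p_a * np.exp(u / lam)              # shape (n_states, n_actions)
        p_a_given_x = w / w.sum(axis=1, keepdims=True)
        # Update the marginal self-consistently under the prior
        p_a = mu @ p_a_given_x
    return p_a_given_x, p_a
```

As λ → 0 the channel approaches the full-information (state-matching) policy; as λ → ∞ it ignores the state and collapses to the prior-optimal mixed action, recovering the multinomial-logit form in between.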