By analogy to evolution, rather than refining individual candidate solutions to a general non-convex optimization problem, we consider minimizing the average loss of a parametric distribution over hypotheses. In this setting, we prove that Fisher-Rao natural gradient descent (FR-NGD) optimally approximates the continuous-time replicator equation, an essential model of evolutionary dynamics, by minimizing the mean-squared error of relative fitness. We term this finding "conjugate natural selection" and demonstrate its utility by numerically solving an example non-convex optimization problem over a continuous strategy space. Next, by developing known connections between discrete-time replicator dynamics and Bayes' rule, we show that FR-NGD on the KL divergence of modeled predictions from observations in continuous time provides the optimal approximation of continuous Bayesian inference. We use this result to demonstrate a novel method for estimating the parameters of a stochastic process.
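The connection between discrete-time replicator dynamics and Bayes' rule mentioned above can be illustrated with a minimal sketch: when each hypothesis's "fitness" is taken to be its likelihood of an observation, one step of the discrete-time replicator update over a distribution of hypotheses coincides exactly with a Bayesian posterior update. The variable names below are illustrative, not from the paper.

```python
import numpy as np

def replicator_step(p, fitness):
    """Discrete-time replicator update: p_i <- p_i * f_i / (mean fitness)."""
    return p * fitness / np.dot(p, fitness)

# Distribution over three hypotheses (the "population").
prior = np.array([0.5, 0.3, 0.2])

# Fitness of each hypothesis = likelihood p(observation | hypothesis).
likelihood = np.array([0.9, 0.5, 0.1])

# One replicator step with likelihood-as-fitness...
posterior = replicator_step(prior, likelihood)

# ...matches Bayes' rule: posterior ∝ prior * likelihood.
bayes = prior * likelihood / np.sum(prior * likelihood)
assert np.allclose(posterior, bayes)
```

The paper's continuous-time results concern approximating the continuous limit of this update with FR-NGD over a parametric family; the sketch shows only the exact discrete correspondence that motivates them.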