The past decades have seen enormous improvements in computational inference based on statistical models, with a wide range of competing computational tools under continual enhancement. In Bayesian inference, first and foremost, MCMC techniques continue to evolve, moving from random walk proposals to Langevin drift, to Hamiltonian Monte Carlo, and so on, with both theoretical and algorithmic advances opening wider access to practitioners. However, this impressive evolution in capacity is confronted by an even steeper increase in the complexity of the models and datasets to be addressed. The difficulties of modelling, and then handling, ever more complex datasets most likely call for a new type of tool for computational inference that dramatically reduces the dimension and size of the raw data while capturing its essential aspects. Approximate models and algorithms may thus be at the core of the next computational revolution.
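To make the progression from random walk proposals to Langevin drift concrete, here is a minimal sketch (not taken from any specific implementation) of the two proposal mechanisms on a standard Gaussian target: a symmetric random-walk Metropolis step versus a Metropolis-adjusted Langevin (MALA) step that drifts along the gradient of the log target. The step sizes and target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Log-density of a standard normal target, up to an additive constant.
    return -0.5 * x**2

def grad_log_target(x):
    # Gradient of the log target; this is what the Langevin drift exploits.
    return -x

def rw_step(x, step=1.0):
    # Random-walk proposal: a symmetric Gaussian jump around the current
    # state, accepted with the plain Metropolis ratio.
    prop = x + step * rng.standard_normal()
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        return prop
    return x

def mala_step(x, step=0.5):
    # Langevin proposal: a drift of half a step along the gradient plus
    # Gaussian noise; the asymmetry of the proposal is corrected in the
    # acceptance ratio (Metropolis-adjusted Langevin algorithm).
    mean_fwd = x + 0.5 * step**2 * grad_log_target(x)
    prop = mean_fwd + step * rng.standard_normal()
    mean_bwd = prop + 0.5 * step**2 * grad_log_target(prop)
    log_q_fwd = -0.5 * (prop - mean_fwd) ** 2 / step**2
    log_q_bwd = -0.5 * (x - mean_bwd) ** 2 / step**2
    log_alpha = log_target(prop) - log_target(x) + log_q_bwd - log_q_fwd
    if np.log(rng.uniform()) < log_alpha:
        return prop
    return x

# Run both chains from the same starting point and compare sample moments.
x_rw = x_mala = 0.0
rw_draws, mala_draws = [], []
for _ in range(10_000):
    x_rw = rw_step(x_rw)
    x_mala = mala_step(x_mala)
    rw_draws.append(x_rw)
    mala_draws.append(x_mala)

print("RW   mean/std:", np.mean(rw_draws), np.std(rw_draws))
print("MALA mean/std:", np.mean(mala_draws), np.std(mala_draws))
```

Both kernels leave the same target invariant; the Langevin version simply uses gradient information to propose moves in locally favourable directions, which is the kind of theoretical and algorithmic input the text alludes to. Hamiltonian Monte Carlo pushes this further by simulating full trajectories of an auxiliary dynamics rather than single drifted steps.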