We present a continuous formulation of machine learning, as a problem in the calculus of variations and differential-integral equations, very much in the spirit of classical numerical analysis and statistical physics. We demonstrate that conventional machine learning models and algorithms, such as the random feature model, the shallow neural network model, and the residual neural network model, can all be recovered as particular discretizations of different continuous formulations. We also present examples of new models, such as the flow-based random feature model, and of new algorithms, such as the smoothed particle method and the spectral method, that arise naturally from this continuous formulation. We discuss how the issues of generalization error and implicit regularization can be studied under this framework.
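As a minimal sketch of what is meant by recovering a model as a discretization (the notation below is illustrative and not quoted from the body of the paper): consider the integral representation of a two-layer model and its Monte Carlo discretization,
\[
f(x) \;=\; \int_{\Omega} a(w)\,\sigma(w^{\top}x)\,\rho(dw)
\;\approx\;
f_m(x) \;=\; \frac{1}{m}\sum_{j=1}^{m} a_j\,\sigma(w_j^{\top}x),
\qquad w_j \sim \rho .
\]
Fixing the sampled features $w_j$ and training only the coefficients $a_j$ yields the random feature model, while training the $w_j$ as well yields the shallow neural network model. Similarly, the forward-Euler discretization $z_{l+1} = z_l + \tfrac{1}{L}\,g(z_l,\theta_l)$ of a flow $\dot{z}(\tau) = g(z(\tau),\theta(\tau))$ yields the residual neural network model.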