The goal of this course (and of this textbook) is to present old and recent results in learning theory for the most widely used learning architectures. The course is aimed at theory-oriented students, as well as at students who want a basic mathematical understanding of the many learning methods used in machine learning and related fields such as computer vision or natural language processing. A particular effort is made to prove many results from first principles, while keeping the exposition as simple as possible. This naturally leads to a choice of key results that showcase the important concepts of learning theory in simple but relevant instances. Some general results are also presented without proof. Of course, the notion of first principles is subjective, and a good knowledge of linear algebra, probability theory, and differential calculus is assumed.
https://www.di.ens.fr/~fbach/learning_theory_class/index.html
Table of contents:
- Learning with infinite data (population setting): decision theory (loss, risk, optimal predictors); decomposition of the excess risk into approximation and estimation errors; no-free-lunch theorems; basic notions of concentration inequalities (McDiarmid, Hoeffding, Bernstein)
- Linear least-squares regression: guarantees in the fixed design setting (simple closed-form estimator); ridge regression: dimension-independent bounds; guarantees in the random design setting; lower bounds on performance
- Empirical risk minimization: convexification of the risk; risk decomposition; estimation error: finite number of hypotheses and covering numbers; Rademacher complexity; penalized problems
- Optimization for machine learning: gradient descent; stochastic gradient descent; generalization bounds through stochastic gradient descent
- Local averaging techniques: partition estimators; Nadaraya-Watson estimators; k-nearest neighbors; universal consistency
- Kernel methods: kernels and representer theorems; algorithms; analysis of well-specified models; sharp analysis of ridge regression; universal consistency
- Model selection: L0 penalty; L1 penalty; high-dimensional estimation
- Neural networks: single-hidden-layer neural networks; estimation error; approximation properties and universality
- Special topics: generalization/optimization properties of infinitely wide neural networks; double descent
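As a small concrete illustration of the "Linear least-squares regression" chapter listed above, here is a minimal sketch of the closed-form ridge estimator. The data, dimensions, and regularization strength below are made up for illustration and are not taken from the book.

```python
# Ridge regression in closed form: a minimal sketch on synthetic data.
# All problem sizes and the regularization strength lam are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10                          # n observations, d features (assumed)
X = rng.standard_normal((n, d))        # random design matrix
w_true = rng.standard_normal(d)        # ground-truth weights
y = X @ w_true + 0.1 * rng.standard_normal(n)  # noisy linear observations

lam = 0.1  # regularization strength (hypothetical value)
# Closed-form ridge solution: w_hat = (X^T X + n*lam*I)^{-1} X^T y
w_hat = np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ y)
```

Solving the regularized normal equations with `np.linalg.solve` (rather than forming an explicit inverse) is the standard numerically stable way to compute this estimator.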
For convenient download, follow the Zhuanzhi (专知) WeChat account and reply "L229" to receive the download link for "INRIA's new 229-page PDF book on machine learning theory, a first-principles exposition of machine learning".