"Machine learning is a multidisciplinary field that has emerged over roughly the past two decades, drawing on probability theory, statistics, approximation theory, convex analysis, computational complexity theory, and other disciplines. Machine learning theory is mainly concerned with designing and analyzing algorithms that allow computers to 'learn' automatically. Machine learning algorithms automatically extract patterns from data and use those patterns to make predictions about unseen data. Because learning algorithms rely heavily on statistical theory, machine learning is especially closely tied to statistical inference and is also known as statistical learning theory. On the algorithm-design side, machine learning theory focuses on learning algorithms that are implementable and effective. Many inference problems are computationally intractable, so part of machine learning research is devoted to developing tractable approximation algorithms." —— Chinese Wikipedia

Knowledge Collection

Machine Learning Courses (collected by Zhuanzhi)

  1. Stanford CS229: Machine Learning (Andrew Ng)
  2. Machine Learning (Hung-yi Lee, National Taiwan University)
  3. Machine Learning and Pattern Recognition (University of Edinburgh)
  4. Courses on machine learning
  5. CSC2535 -- Spring 2013 Advanced Machine Learning
  6. Stanford CME 323: Distributed Algorithms and Optimization
  7. University at Buffalo CSE574: Machine Learning and Probabilistic Graphical Models Course
  8. Stanford CS229: Machine Learning Autumn 2015
  9. Stanford / Winter 2014-2015 CS229T/STATS231: Statistical Learning Theory
  10. CMU Fall 2015 10-715: Advanced Introduction to Machine Learning
  11. 2015 Machine Learning Summer School: Convex Optimization Short Course
  12. STA 4273H [Winter 2015]: Large Scale Machine Learning
  13. University of Oxford: Machine Learning: 2014-2015
  14. Computer Science 294: Practical Machine Learning [Fall 2009]
  15. Statistics, Probability and Machine Learning Short Course
  16. Statistical Learning
  17. Machine learning courses online
  18. Build Intelligent Applications: Master machine learning fundamentals in five hands-on courses
  19. Machine Learning
  20. Princeton Computer Science 598D: Overcoming Intractability in Machine Learning
  21. Princeton Computer Science 511: Theoretical Machine Learning
  22. MACHINE LEARNING FOR MUSICIANS AND ARTISTS
  23. CMSC 726: Machine Learning
  24. MIT: 9.520: Statistical Learning Theory and Applications, Fall 2015
  25. CMU: Machine Learning: 10-701/15-781, Spring 2011
  26. NLA 2015 course material
  27. CS 189/289A: Introduction to Machine Learning [with videos]
  28. An Introduction to Statistical Machine Learning Spring 2014 [for ACM Class]
  29. CS 159: Advanced Topics in Machine Learning [Spring 2016]
  30. Advanced Statistical Computing [Vanderbilt University]
  31. Stanford CS229: Machine Learning Spring 2016
  32. Machine Learning: 2015-2016
  33. CS273a: Introduction to Machine Learning
  34. Machine Learning CS-433
  35. Machine Learning Introduction: A machine learning course using Python, Jupyter Notebooks, and OpenML
  36. Advanced Introduction to Machine Learning
  37. Statistical Learning Theory and Applications [MIT]
  38. Regularization Methods for Machine Learning
  39. Convex Optimization: Spring 2015
  40. CMU: Probabilistic Graphical Models [10-708, Spring 2014]
  41. Advanced Optimization and Randomized Methods
  42. Machine Learning for Robotics and Computer Vision
  43. Statistical Machine Learning
  44. Probabilistic Graphical Models [10-708, Spring 2016]

Mathematical Foundations

Calculus

  1. Khan Academy Calculus [https://www.khanacademy.org/math/calculus-home]

Linear Algebra

  1. Khan Academy Linear Algebra
  2. Linear Algebra (MIT), currently the best linear algebra course available

Statistics and probability

  1. edx Introduction to Statistics [https://www.edx.org/course/introduction-statistics-descriptive-uc-berkeleyx-stat2-1x]
  2. edx Probability [https://www.edx.org/course/introduction-statistics-probability-uc-berkeleyx-stat2-2x]
  3. An exploration of Random Processes for Engineers [http://www.ifp.illinois.edu/~hajek/Papers/randomprocDec11.pdf]
  4. Information Theory [http://colah.github.io/posts/2015-09-Visual-Information/]

VIP Content

Build and deploy machine learning and deep learning models in production with end-to-end examples.

This book begins with the machine learning model deployment process and its associated challenges. It then covers building and deploying machine learning models using different web frameworks such as Flask and Streamlit. A chapter on Docker explains how to package and containerize machine learning models. The book also illustrates how to set up and train large-scale machine learning and deep learning models using Kubernetes.

This book is a good starting point for anyone who wants to level up their machine learning skills by taking pre-built models and deploying them to production. It also offers guidance for those who want to move beyond Jupyter notebooks and train models at scale in cloud environments. All the code presented in the book is provided as Python scripts, so you can try out the examples and extend them in interesting ways.

What you will learn:

Build, train, and deploy machine learning models at scale using Kubernetes

Containerize any type of machine learning model and run it on any platform using Docker

Deploy machine learning and deep learning models using the Flask and Streamlit frameworks
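The deployment workflow described above ultimately rests on one pattern: serialize a trained model once, then load it at serving time and call it per request. Below is a minimal, stdlib-only sketch of that pattern; the class `ThresholdModel` and the file name `model.pkl` are hypothetical stand-ins, since the book's actual examples wrap real scikit-learn or deep learning models behind Flask or Streamlit routes.

```python
import pickle

class ThresholdModel:
    """Hypothetical stand-in for a trained model: predicts 1 when the
    input exceeds a learned threshold, 0 otherwise."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, values):
        return [1 if v > self.threshold else 0 for v in values]

# "Training" step: fit the model once and serialize it to disk.
model = ThresholdModel(threshold=0.5)
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Serving step (e.g. inside a Flask route handler): load the model
# once at startup and reuse it for every incoming request.
with open("model.pkl", "rb") as f:
    served_model = pickle.load(f)

print(served_model.predict([0.2, 0.9]))  # → [0, 1]
```

In a Flask app, the load would happen at module import time and `served_model.predict(...)` would be called inside the route function, so the (often expensive) deserialization cost is paid once rather than per request.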


Latest Papers

Despite remarkable success in a variety of applications, it is well-known that deep learning can fail catastrophically when presented with out-of-distribution data. Toward addressing this challenge, we consider the domain generalization problem, wherein predictors are trained using data drawn from a family of related training domains and then evaluated on a distinct and unseen test domain. We show that under a natural model of data generation and a concomitant invariance condition, the domain generalization problem is equivalent to an infinite-dimensional constrained statistical learning problem; this problem forms the basis of our approach, which we call Model-Based Domain Generalization. Due to the inherent challenges in solving constrained optimization problems in deep learning, we exploit nonconvex duality theory to develop unconstrained relaxations of this statistical problem with tight bounds on the duality gap. Based on this theoretical motivation, we propose a novel domain generalization algorithm with convergence guarantees. In our experiments, we report improvements of up to 30 percentage points over state-of-the-art domain generalization baselines on several benchmarks including ColoredMNIST, Camelyon17-WILDS, FMoW-WILDS, and PACS.
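The constrained-then-relaxed structure the abstract describes can be illustrated on a toy problem. The sketch below is schematic only and is not the paper's algorithm (which operates on parameterized predictors in infinite dimensions with duality-gap bounds): it runs plain primal descent / dual ascent on the Lagrangian of minimizing x² subject to x ≥ 1, whose optimum is x = 1 with multiplier λ = 2.

```python
# Toy constrained problem: minimize f(x) = x^2  subject to  g(x) = 1 - x <= 0.
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x).
def grad_x(x, lam):
    # d/dx L(x, lam)
    return 2 * x - lam

x, lam = 0.0, 0.0
eta = 0.05  # step size for both primal and dual updates
for _ in range(2000):
    x -= eta * grad_x(x, lam)              # primal descent on x
    lam = max(0.0, lam + eta * (1 - x))    # dual ascent, projected to lam >= 0

print(round(x, 2), round(lam, 2))  # → 1.0 2.0
```

The alternating descent/ascent converges to the saddle point of the Lagrangian; the paper's contribution is showing that, under its invariance condition, such an unconstrained relaxation of the domain generalization problem has a provably small duality gap.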
