"Machine learning is a multidisciplinary field that has emerged over the past twenty-odd years, drawing on probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. Machine learning theory is chiefly concerned with designing and analyzing algorithms that allow computers to 'learn' automatically. Machine learning algorithms are a class of algorithms that automatically discover regularities in data and use those regularities to make predictions about unseen data. Because learning algorithms involve a great deal of statistical theory, machine learning is especially closely tied to statistical inference and is also known as statistical learning theory. On the algorithm-design side, machine learning theory focuses on learning algorithms that are implementable and effective. Many inference problems are computationally intractable, so part of machine learning research is devoted to developing tractable approximate algorithms." — Chinese Wikipedia
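The quote's core idea, learning a regularity from data and using it to predict unseen inputs, can be illustrated with a minimal sketch. This is a toy least-squares fit written from scratch for illustration (real work would use a library such as scikit-learn); all names here are illustrative.

```python
# A minimal illustration of "learning from data": fit a one-variable
# linear rule y ~ w*x + b from examples, then predict an unseen input.

def fit_linear(xs, ys):
    """Closed-form least-squares fit of y = w*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# "Training data" generated by the hidden rule y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]

w, b = fit_linear(xs, ys)
print(w, b)        # recovers the rule: w = 2.0, b = 1.0
print(w * 10 + b)  # prediction for the unseen input x = 10 -> 21.0
```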

Knowledge Compendium

Machine Learning Courses (collected by Zhuanzhi)

  1. Stanford CS229: Machine Learning (Andrew Ng)
  2. NTU Machine Learning (Hung-yi Lee)
  3. University of Edinburgh: Machine Learning and Pattern Recognition
  4. Courses on machine learning
  5. CSC2535 -- Spring 2013 Advanced Machine Learning
  6. Stanford CME 323: Distributed Algorithms and Optimization
  7. University at Buffalo CSE574: Machine Learning and Probabilistic Graphical Models Course
  8. Stanford CS229: Machine Learning Autumn 2015
  9. Stanford / Winter 2014-2015 CS229T/STATS231: Statistical Learning Theory
  10. CMU Fall 2015 10-715: Advanced Introduction to Machine Learning
  11. 2015 Machine Learning Summer School: Convex Optimization Short Course
  12. STA 4273H [Winter 2015]: Large Scale Machine Learning
  13. University of Oxford: Machine Learning: 2014-2015
  14. Computer Science 294: Practical Machine Learning [Fall 2009]
  15. Statistics, Probability and Machine Learning Short Course
  16. Statistical Learning
  17. Machine learning courses online
  18. Build Intelligent Applications: Master machine learning fundamentals in five hands-on courses
  19. Machine Learning
  20. Princeton Computer Science 598D: Overcoming Intractability in Machine Learning
  21. Princeton Computer Science 511: Theoretical Machine Learning
  22. MACHINE LEARNING FOR MUSICIANS AND ARTISTS
  23. CMSC 726: Machine Learning
  24. MIT: 9.520: Statistical Learning Theory and Applications, Fall 2015
  25. CMU: Machine Learning: 10-701/15-781, Spring 2011
  26. NLA 2015 course material
  27. CS 189/289A: Introduction to Machine Learning [with videos]
  28. An Introduction to Statistical Machine Learning Spring 2014 [for ACM Class]
  29. CS 159: Advanced Topics in Machine Learning [Spring 2016]
  30. Advanced Statistical Computing [Vanderbilt University]
  31. Stanford CS229: Machine Learning Spring 2016
  32. Machine Learning: 2015-2016
  33. CS273a: Introduction to Machine Learning
  34. Machine Learning CS-433
  35. Machine Learning Introduction: A machine learning course using Python, Jupyter Notebooks, and OpenML
  36. Advanced Introduction to Machine Learning
  37. Statistical Learning Theory and Applications [MIT]
  38. Regularization Methods for Machine Learning
  39. Convex Optimization: Spring 2015
  40. CMU: Probabilistic Graphical Models [10-708, Spring 2014]
  41. Advanced Optimization and Randomized Methods
  42. Machine Learning for Robotics and Computer Vision
  43. Statistical Machine Learning
  44. Probabilistic Graphical Models [10-708, Spring 2016]

Mathematical Foundations

Calculus

  1. Khan Academy Calculus [https://www.khanacademy.org/math/calculus-home]

Linear Algebra

  1. Khan Academy Linear Algebra
  2. MIT Linear Algebra (currently the best linear algebra course available)

Statistics and probability

  1. edx Introduction to Statistics [https://www.edx.org/course/introduction-statistics-descriptive-uc-berkeleyx-stat2-1x]
  2. edx Probability [https://www.edx.org/course/introduction-statistics-probability-uc-berkeleyx-stat2-2x]
  3. An exploration of Random Processes for Engineers [http://www.ifp.illinois.edu/~hajek/Papers/randomprocDec11.pdf]
  4. Information Theory [http://colah.github.io/posts/2015-09-Visual-Information/]

VIP Content

Today, more than half of the machine-learning (ML) models that companies build never make it into production, chiefly because of operational challenges and obstacles, both technical and organizational. Either way, the bottom line is that a model that is not in production cannot deliver business impact.

This book introduces the key concepts of MLOps to help data scientists and application engineers not only operationalize ML models to drive real business change but also maintain and improve those models over time. Drawing on lessons learned from numerous MLOps deployments around the world, nine machine-learning experts offer insight into the five steps of the model life cycle (build, preproduction, deployment, monitoring, and governance) and show how to run a robust MLOps process end to end.

https://www.oreilly.com/library/view/introducing-mlops/9781492083283/

This book helps you:

  1. Deliver data-science value by reducing friction throughout the ML pipeline and workflow
  2. Improve ML models through retraining, periodic tuning, and complete refactoring to ensure long-term accuracy
  3. Design the MLOps life cycle to minimize organizational risk, with models that are unbiased, fair, and explainable
  4. Operationalize ML models for pipeline deployment and for external business systems that are more complex and less standardized
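The five life-cycle steps the book names (build, preproduction, deployment, monitoring, governance) can be sketched as a minimal pipeline skeleton. Everything below, including the function names, the toy "model," and the drift threshold, is illustrative and is not taken from the book.

```python
# Illustrative skeleton of the MLOps life-cycle steps named above.
# The "model" is just a running mean; governance (audit, fairness
# review) is an organizational process and is not coded here.

def build(data):
    """Build: train a trivial 'model' (here, the mean of the data)."""
    return {"mean": sum(data) / len(data)}

def preproduction_check(model, holdout, max_error=1.0):
    """Preproduction: validate on held-out data before release."""
    error = max(abs(x - model["mean"]) for x in holdout)
    return error <= max_error

def deploy(model, registry):
    """Deployment: publish the model where serving code can find it."""
    registry["current"] = model

def monitor(model, live_data, drift_threshold=0.5):
    """Monitoring: flag drift when live data departs from training."""
    live_mean = sum(live_data) / len(live_data)
    return abs(live_mean - model["mean"]) > drift_threshold

registry = {}
model = build([1.0, 2.0, 3.0])
if preproduction_check(model, [1.5, 2.5]):
    deploy(model, registry)
drifted = monitor(registry["current"], [5.0, 6.0])
print(drifted)  # True: live mean 5.5 is far from training mean 2.0
```

When monitoring flags drift, the loop returns to the build step with fresh data, which is the retraining cycle the book's second point describes.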


Latest Content

We consider the problem of sparse nonnegative matrix factorization (NMF) with archetypal regularization. The goal is to represent a collection of data points as nonnegative linear combinations of a few nonnegative sparse factors with appealing geometric properties, arising from the use of archetypal regularization. We generalize the notion of robustness studied in Javadi and Montanari (2019) (without sparsity) to the notions of (a) strong robustness, which implies each estimated archetype is close to the underlying archetypes, and (b) weak robustness, which implies there exists at least one recovered archetype that is close to the underlying archetypes. Our theoretical results on robustness guarantees hold under minimal assumptions on the underlying data and apply to settings where the underlying archetypes need not be sparse. We propose new algorithms for our optimization problem and present numerical experiments on synthetic and real datasets that shed further light on our proposed framework and theoretical developments.
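As background for the abstract above, here is a minimal generic sparse-NMF sketch using Lee-Seung multiplicative updates with an l1 penalty on the coefficient matrix. This is a baseline for illustration only, not the paper's archetypal-regularized method, and the function name and parameters are assumptions of this sketch.

```python
import numpy as np

def sparse_nmf(X, rank, l1=0.1, iters=200, seed=0):
    """Generic sparse NMF baseline: X ~ W @ H with W, H >= 0.

    Multiplicative updates minimizing ||X - WH||^2 + l1 * sum(H);
    the l1 term in H's denominator shrinks H toward sparsity.
    NOT the archetypal-regularized algorithm from the paper.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    eps = 1e-10  # avoid division by zero
    for _ in range(iters):
        # Update H: l1 penalty enters the denominator
        H *= (W.T @ X) / (W.T @ W @ H + l1 + eps)
        # Standard multiplicative update for W
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy data: rows are nonnegative mixtures of two underlying factors
X = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 2.0],
              [1.0, 1.0, 3.0, 3.0]])
W, H = sparse_nmf(X, rank=2)
print(np.linalg.norm(X - W @ H))  # reconstruction error (small)
```

Archetypal regularization would additionally pull the factors toward the convex hull of the data, which is what yields the geometric properties the abstract describes.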

