Recently, the Nystr\"{o}m method has proven effective, both empirically and theoretically, at speeding up the training of kernel machines while retaining satisfactory performance and accuracy. Several approaches have been proposed to exploit the Nystr\"{o}m method for scaling up kernel machines. However, there is no comparative study of these approaches, and each has been analyzed only for specific types of kernel machines; it therefore remains unclear which approach is more promising when extended to other kernel machines. In this work, motivated by the column inclusion property of Gram matrices, we develop a new approach with a clear geometric interpretation for running Nystr\"{o}m-based kernel machines. We show that the two other well-studied approaches can be equivalently transformed into our proposed one, so analysis established for the proposed approach also applies to them. In particular, our approach makes it possible to derive approximation errors in a general setting. Our analysis also reveals the relations among the aforementioned two approaches and a third, naive one. First, the analytical forms of the corresponding approximate solutions differ in only one term. Second, the naive approach can be implemented efficiently by sharing the same training procedure as the others. These analytical results lead to the conjecture that the naive approach can provide more accurate approximate solutions than the two more sophisticated approaches. Since our analysis also offers ways of computing the accuracy of these approximate solutions, we run experiments on classification tasks to confirm the conjecture.
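The abstract does not spell out the Nystr\"{o}m construction itself, so the following is a minimal NumPy sketch of the standard landmark-based approximation it builds on, not the specific approach proposed in the paper: sample $m$ landmark points, form the cross-kernel block $C$ and landmark block $W$, and approximate the Gram matrix as $K \approx C W^{+} C^{\top}$. The RBF kernel, uniform landmark sampling, and all parameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel: exp(-gamma * ||x - y||^2) for all pairs.
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def nystrom_approx(X, m, gamma=1.0, seed=0):
    """Rank-m Nystrom approximation K ~= C W^+ C^T of the Gram matrix.

    Returns the approximate Gram matrix and the sampled landmark indices.
    Uniform sampling without replacement is used here for simplicity.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)  # landmark indices
    C = rbf_kernel(X, X[idx], gamma)                 # n x m cross-kernel block
    W = C[idx]                                       # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T, idx

# Toy usage on synthetic data (sizes chosen arbitrarily).
X = np.random.default_rng(1).normal(size=(200, 5))
K = rbf_kernel(X, X)
K_hat, idx = nystrom_approx(X, m=50)
```

Note that the approximation reproduces the sampled landmark columns of $K$ exactly whenever $W$ is invertible, which is the geometric fact exploited by column-based analyses of this kind.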


