Artificial Intelligence (AI) is a branch of computer science: the study and development of theories, methods, techniques, and application systems for simulating, extending, and augmenting human intelligence.


Theoretical results suggest that, in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), we need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as neural networks with many hidden layers, or complicated propositional formulae that re-use many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but recently proposed learning algorithms, such as those for Deep Belief Networks, have achieved notable success in exploring this problem, reaching a new state of the art in certain areas.

This book discusses the methods and principles of deep learning algorithms, in particular the unsupervised learning algorithms for the single-layer models that serve as building blocks, such as the Restricted Boltzmann Machine (RBM), which is used to construct deep models like Deep Belief Networks.
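The building-block idea can be illustrated with a minimal sketch: a Bernoulli-Bernoulli RBM trained with one-step contrastive divergence (CD-1), the procedure commonly used to pretrain each layer of a Deep Belief Network. The class and toy data below are illustrative assumptions, not code from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with CD-1 (illustrative sketch)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.c)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one Gibbs step from the sampled hidden units.
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        # Approximate log-likelihood gradient (CD-1 update).
        n = len(v0)
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b += self.lr * (v0 - pv1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)  # reconstruction error, as a rough progress proxy

# Toy data: three repeating binary patterns.
data = np.array([[1, 1, 0, 0, 0, 0],
                 [0, 0, 1, 1, 0, 0],
                 [0, 0, 0, 0, 1, 1]] * 20, dtype=float)
rbm = RBM(n_visible=6, n_hidden=3)
errs = [rbm.cd1_step(data) for _ in range(200)]
```

In a Deep Belief Network, one such RBM is trained per layer; the hidden activations of each trained layer become the "visible" input for the next.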

Yoshua Bengio is a professor in the Department of Computer Science and Operations Research at the Université de Montréal, Canada, where he heads the Montreal Institute for Learning Algorithms. One of the defining figures in the history of deep learning, he has published over 200 papers and two monographs, and is among the most highly cited computer scientists in Canada.

http://edlab-www.cs.umass.edu/cs697l/readings/Learning%20Deep%20Architectures%20for%20AI.pdf


Popular Content

The questions `can machines think' and `can machines do what humans do' drive the development of artificial intelligence. Although recent artificial intelligence succeeds in many data-intensive applications, it still lacks the ability to learn from a limited number of examples and to generalize quickly to new tasks. To tackle this problem, one has to turn to machine learning, which supports the scientific study of artificial intelligence. In particular, a machine learning problem called Few-Shot Learning (FSL) targets this case. By exploiting prior knowledge, FSL can rapidly generalize to new tasks with limited supervised experience, mimicking the human ability to acquire knowledge from few examples through generalization and analogy. It has been seen as a test-bed for real artificial intelligence, a way to reduce laborious data gathering and computationally costly training, and an antidote for learning from rare cases. With extensive work on FSL emerging, we give a comprehensive survey of it. We first give a formal definition of FSL. We then point out the core issues of FSL, which turn the problem from "how to solve FSL" into "how to deal with the core issues". Accordingly, existing work from the birth of FSL to the most recently published is categorized in a unified taxonomy, with a thorough discussion of the pros and cons of the different categories. Finally, we envision possible future directions for FSL in terms of problem setup, techniques, applications, and theory, hoping to provide insights for both beginners and experienced researchers.


Latest Papers

Despite remarkable success in a variety of applications, it is well-known that deep learning can fail catastrophically when presented with out-of-distribution data. Toward addressing this challenge, we consider the domain generalization problem, wherein predictors are trained using data drawn from a family of related training domains and then evaluated on a distinct and unseen test domain. We show that under a natural model of data generation and a concomitant invariance condition, the domain generalization problem is equivalent to an infinite-dimensional constrained statistical learning problem; this problem forms the basis of our approach, which we call Model-Based Domain Generalization. Due to the inherent challenges in solving constrained optimization problems in deep learning, we exploit nonconvex duality theory to develop unconstrained relaxations of this statistical problem with tight bounds on the duality gap. Based on this theoretical motivation, we propose a novel domain generalization algorithm with convergence guarantees. In our experiments, we report improvements of up to 30 percentage points over state-of-the-art domain generalization baselines on several benchmarks including ColoredMNIST, Camelyon17-WILDS, FMoW-WILDS, and PACS.
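The primal-dual machinery the abstract alludes to can be illustrated on a toy convex problem. This is only an analogy to the paper's infinite-dimensional setting: the objective, constraint, step sizes, and variable names below are assumptions, showing how minimizing a Lagrangian in the primal while performing projected gradient ascent on the multiplier enforces a constraint:

```python
# Toy constrained problem: minimize f(w) = (w - 3)^2 subject to g(w) = w - 1 <= 0.
# The unconstrained minimum w = 3 is infeasible; the constrained optimum is w = 1.

def f(w):
    return (w - 3.0) ** 2

def g(w):
    return w - 1.0

w, lam = 0.0, 0.0          # primal variable and Lagrange multiplier
eta_w, eta_lam = 0.05, 0.05

for _ in range(2000):
    # Primal descent on the Lagrangian L(w, lam) = f(w) + lam * g(w).
    grad_w = 2.0 * (w - 3.0) + lam
    w -= eta_w * grad_w
    # Dual ascent on lam, projected onto lam >= 0.
    lam = max(0.0, lam + eta_lam * g(w))
```

At the saddle point the KKT condition 2(w - 3) + lam = 0 with w = 1 gives lam = 4, so the iterates settle near (w, lam) = (1, 4). In the paper's setting the primal variable is a deep network and the constraint encodes domain invariance, but the alternating update structure is analogous.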
