Artificial Intelligence (AI) is a new technical science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. AI is a branch of computer science.


The report's study panel was chaired by Michael L. Littman, professor of computer science at Brown University, and comprised 17 members from academia and industry research labs, including scholars of computer science, engineering, law, political science, policy, sociology, and economics.

Whereas the first report five years ago focused explicitly on AI's impact in North American cities, this report probes more deeply into AI's impact on people and societies worldwide.

Expertise. The report was written by a study panel of core multidisciplinary researchers in the field: experts whose primary professional activity, for many years, has been creating AI algorithms or studying their influence on society. The authors are firmly rooted in the AI field and provide an "insider" perspective.

Longevity. This is a longitudinal study, with reports planned every five years for 100 years. The September 2021 report is the second in the planned series; the first, published on September 1, 2016, was widely covered in the popular press and is used in AI courses around the world.

The report has four main audiences:

For the general public, it offers an accessible, scientifically and technologically accurate portrayal of the current state of AI and its potential.

For industry, the report identifies relevant technological, legal, and ethical challenges, and may help guide resource allocation.

For local, national, and international governments, the report supports better planning for the comprehensive governance of AI technology.

Finally, for AI researchers and research institutions, the report helps set research priorities and weigh the economic, ethical, and legal issues raised by AI research and its applications.


Hot Content

The quests of "can machines think" and "can machines do what humans do" drive the development of artificial intelligence. Although recent AI succeeds in many data-intensive applications, it still lacks the ability to learn from limited exemplars and generalize rapidly to new tasks. To tackle this problem, one turns to machine learning, which supports the scientific study of artificial intelligence. In particular, a machine learning problem called Few-Shot Learning (FSL) targets this case: it rapidly generalizes to new tasks with limited supervised experience by exploiting prior knowledge, mimicking humans' ability to acquire knowledge from few examples through generalization and analogy. FSL has been seen as a test-bed for real artificial intelligence, a way to reduce laborious data gathering and computationally costly training, and an antidote for learning from rare cases. With extensive work on FSL emerging, we give a comprehensive survey of it. We first give a formal definition of FSL. We then point out the core issues of FSL, which turn the question from "how to solve FSL" into "how to deal with the core issues". Accordingly, existing works, from the birth of FSL to the most recently published, are categorized in a unified taxonomy, with thorough discussion of the pros and cons of the different categories. Finally, we envision possible future directions for FSL in terms of problem setup, techniques, applications, and theory, hoping to provide insights to both beginners and experienced researchers.
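The FSL setting described above is usually evaluated with N-way K-shot episodes: a small labeled "support" set for adaptation and a held-out "query" set for testing. A minimal sketch of episode construction (the function name and data layout are illustrative assumptions, not part of the survey):

```python
import random
from collections import defaultdict

def sample_episode(labeled_data, n_way=5, k_shot=1, n_query=15, rng=None):
    """Sample one N-way K-shot episode from (example, label) pairs.

    The support set holds K labeled examples per class (the "few shots");
    the query set holds held-out examples of the same classes that the
    learner must classify after adapting on the support set.
    """
    rng = rng or random.Random()
    by_class = defaultdict(list)
    for x, y in labeled_data:
        by_class[y].append(x)
    # Only classes with enough examples for both support and query qualify.
    eligible = [c for c, xs in by_class.items() if len(xs) >= k_shot + n_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for c in classes:
        xs = rng.sample(by_class[c], k_shot + n_query)
        support += [(x, c) for x in xs[:k_shot]]
        query += [(x, c) for x in xs[k_shot:]]
    return support, query
```

An FSL method is then trained and tested over many such episodes, so that "a task" is itself a sample, which is what lets it generalize to new tasks with only K labeled examples each.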

0
321
下载
预览

Latest Papers

Generative models trained using Differential Privacy (DP) are increasingly used to produce and share synthetic data in a privacy-friendly manner. In this paper, we set out to analyze the impact of DP on these models vis-a-vis underrepresented classes and subgroups of data. We do so from two angles: 1) the size of classes and subgroups in the synthetic data, and 2) classification accuracy on them. We also evaluate the effect of various levels of imbalance and privacy budgets. Our experiments, conducted using three state-of-the-art DP models (PrivBayes, DP-WGAN, and PATE-GAN), show that DP results in opposite size distributions in the generated synthetic data. More precisely, it affects the gap between the majority and minority classes and subgroups, either reducing it (a "Robin Hood" effect) or increasing it (a "Matthew" effect). However, both of these size shifts lead to similar disparate impacts on a classifier's accuracy, disproportionately affecting the underrepresented subparts of the data. As a result, we call for caution when analyzing or training a model on synthetic data, at the risk of treating different subpopulations unevenly, which might also lead to unreliable conclusions.
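The size shifts the abstract describes can be quantified directly from label counts. A minimal sketch of one such measurement (the function name and the share-gap metric are assumptions for illustration; the paper's own metrics may differ):

```python
from collections import Counter

def gap_shift(real_labels, synthetic_labels):
    """Compare the majority-minority class-share gap in real vs. synthetic data.

    A negative shift means the synthetic data narrowed the gap between the
    largest and smallest classes (a "Robin Hood" effect); a positive shift
    means it widened the gap (a "Matthew" effect).
    """
    def share_gap(labels):
        counts = Counter(labels)
        shares = [c / len(labels) for c in counts.values()]
        return max(shares) - min(shares)

    return share_gap(synthetic_labels) - share_gap(real_labels)
```

Either direction of shift distorts the data a downstream classifier sees, which is consistent with the paper's finding that both effects hurt accuracy on underrepresented subgroups.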
