Active learning is a subfield of machine learning (and, more broadly, of artificial intelligence); in statistics it is also known as query learning or optimal experimental design. The "learning module" and the "selection strategy" are the two basic and essential components of an active learning algorithm.

The term also names a concept in education: active learning is "a method of learning in which students are actively or experientially involved in the learning process and where there are different levels of active learning, depending on student involvement" (Bonwell & Eison 1991). Bonwell & Eison (1991) state that in active learning, students "engage in activities other than passively listening to a lecture." In a report for the Association for the Study of Higher Education (ASHE), the authors discuss a range of methods for promoting active learning. They cite literature indicating that, to learn, students must do more than just listen: they must read, write, discuss, and engage in problem solving. This process involves three learning domains: knowledge, skills, and attitudes (KSA). This taxonomy of learning behaviors can be thought of as "the goals of the learning process." In particular, students must engage in higher-order thinking tasks such as analysis, synthesis, and evaluation.
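The two components named above — a learning module that fits a model to the labels collected so far, and a selection strategy that decides which label to request next — can be sketched in a minimal pool-based loop. The toy 2-D data, the nearest-centroid learner, and the uncertainty-sampling strategy below are all illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool: two Gaussian blobs in 2-D (illustrative data); the
# oracle labels y are hidden from the learner until the selection
# strategy asks for them.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

labeled = [0, 100]  # seed set: one labeled point per class
unlabeled = [i for i in range(len(X)) if i not in labeled]

def predict_proba(X_query, X_lab, y_lab):
    """Learning module: a nearest-centroid classifier that turns
    distances to the two class centroids into a crude P(class 1)."""
    c0 = X_lab[y_lab == 0].mean(axis=0)
    c1 = X_lab[y_lab == 1].mean(axis=0)
    d0 = np.linalg.norm(X_query - c0, axis=1)
    d1 = np.linalg.norm(X_query - c1, axis=1)
    return d0 / (d0 + d1 + 1e-12)

for _ in range(10):  # budget of 10 label queries
    p = predict_proba(X[unlabeled], X[labeled], y[labeled])
    # Selection strategy: uncertainty sampling -- query the pool
    # point whose predicted probability is closest to 0.5.
    i = unlabeled[int(np.argmin(np.abs(p - 0.5)))]
    labeled.append(i)   # the oracle reveals y[i]
    unlabeled.remove(i)

acc = ((predict_proba(X, X[labeled], y[labeled]) > 0.5) == y).mean()
print(f"labels used: {len(labeled)}, pool accuracy: {acc:.2f}")
```

Swapping in a different scoring rule (margin, entropy, expected model change) changes only the line that picks `i`; the surrounding loop is the same for any pool-based strategy.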


Active learning is a protocol for supervised machine learning in which a learning algorithm sequentially requests the labels of selected data points from a large pool of unlabeled data. This contrasts with passive learning, where labeled data are acquired at random. The goal of active learning is to produce a highly accurate classifier, ideally using fewer labels than the number of random labeled data points passive learning would need to achieve the same. This book describes recent advances in our understanding of the theoretical benefits of active learning, and their implications for the design of effective active learning algorithms. Much of it focuses on one particular technique, disagreement-based active learning, which by now has amassed a substantial literature. It also briefly surveys several alternative approaches from the literature. The emphasis is on theorems about the performance of a few general algorithms, including rigorous proofs where appropriate. The aim, however, is pedagogical: the focus is on results that illustrate fundamental ideas, rather than on obtaining the strongest or most general known theorems. The intended audience includes researchers and advanced graduate students in machine learning and statistics who are interested in a deeper understanding of recent and ongoing developments in the theory of active learning.
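The disagreement-based idea can be illustrated with the classic 1-D threshold class, where the set of hypotheses consistent with the labels so far (the version space) is an interval and the disagreement region is directly computable. The data and the true threshold below are hypothetical; this is a sketch of the mechanism, not the book's algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothesis class: 1-D thresholds h_t(x) = 1[x >= t] on [0, 1].
# The pool and the true threshold 0.3 are illustrative choices.
true_t = 0.3
xs = rng.uniform(0, 1, 200)            # unlabeled points, random order
oracle = lambda x: int(x >= true_t)    # each label costs one query

# Version space: thresholds consistent with all labels seen so far
# form an interval (lo, hi].
lo, hi = 0.0, 1.0
queries = 0
for x in xs:
    # Disagreement region: consistent thresholds predict differently
    # on x exactly when lo <= x < hi; only those points are queried.
    if lo <= x < hi:
        queries += 1
        if oracle(x) == 1:
            hi = min(hi, x)   # label 1 forces t <= x
        else:
            lo = max(lo, x)   # label 0 forces t > x
    # Labels outside the disagreement region are inferred for free.

print(f"queried {queries} of {len(xs)} points; "
      f"version space shrank to ({lo:.3f}, {hi:.3f}]")
```

Points outside the disagreement region get their labels for free because every consistent hypothesis agrees on them; the label savings over passive learning come entirely from how fast that region shrinks.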


Latest Content

Many active learning and search approaches are intractable for large-scale industrial settings with billions of unlabeled examples. Existing approaches search globally for the optimal examples to label, scaling linearly or even quadratically with the unlabeled data. In this paper, we improve the computational efficiency of active learning and search methods by restricting the candidate pool for labeling to the nearest neighbors of the currently labeled set instead of scanning over all of the unlabeled data. We evaluate several selection strategies in this setting on three large-scale computer vision datasets: ImageNet, OpenImages, and a de-identified and aggregated dataset of 10 billion images provided by a large internet company. Our approach achieved similar mean average precision and recall as the traditional global approach while reducing the computational cost of selection by up to three orders of magnitude, thus enabling web-scale active learning.
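The candidate-restriction step described above can be sketched as follows. The synthetic pool, the brute-force distance scan, and the parameter `k` are assumptions for illustration; a web-scale system would use an approximate nearest-neighbor index for the lookups, and the payoff comes from running the expensive selection score over the small candidate set rather than the full pool:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: a large unlabeled pool and a small labeled set.
pool = rng.normal(0.0, 1.0, (100_000, 8))
labeled_idx = rng.choice(len(pool), 20, replace=False).tolist()

def candidates_near_labeled(pool, labeled_idx, k=50):
    """Restrict the labeling candidates to the k nearest neighbors
    of each labeled point, instead of the entire unlabeled pool."""
    cands = set()
    for i in labeled_idx:
        # Brute-force distances for clarity; a real system would use
        # an approximate nearest-neighbor index here.
        d = np.linalg.norm(pool - pool[i], axis=1)
        cands.update(np.argpartition(d, k)[:k].tolist())
    return sorted(cands - set(labeled_idx))

cands = candidates_near_labeled(pool, labeled_idx)
print(f"selection now scores {len(cands)} candidates "
      f"instead of {len(pool)}")
```

Any selection strategy (uncertainty, margin, etc.) then scores only `cands`, so its cost scales with the labeled set times `k` rather than with the unlabeled pool.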

