Complex machine learning models such as deep convolutional neural networks and recurrent neural networks have recently made great strides in a wide range of computer vision applications, including object/scene recognition, image captioning, and visual question answering, yet they are usually treated as black boxes. As models grow ever deeper in pursuit of better recognition accuracy, it becomes increasingly difficult to understand the predictions a model makes and why it makes them.

The goal of this tutorial is to broadly engage the computer vision community with the topic of interpretability and explainability of computer vision models. We review recent progress in visualization, interpretation, and explanation methods for analyzing both data and models in computer vision. The main theme of the tutorial is to build consensus on the emerging topic of machine learning interpretability by clarifying its motivation, typical methodologies, future trends, and the potential industrial applications of the resulting interpretability.

Contents

  • Speaker: Bolei Zhou
  • Title: Understanding Latent Semantics in GANs
  • Speaker: Andrea Vedaldi
  • Title: Understanding Models via Visualization and Attribution
  • Speaker: Alexander Binder
  • Title: Explaining Deep Learning for Identifying Structures and Biases in Computer Vision
  • Speaker: Alan L. Yuille
  • Title: Deep Compositional Networks
Slides:
  • iccv19_binder_slide.pdf
  • iccv19_zhou_slide.pdf
  • iccv19_yuille_slide.pdf

Related Content

Andrea Vedaldi is an Associate Professor in the Department of Engineering Science at the University of Oxford, a tutorial fellow of New College, and a member of the Visual Geometry Group (VGG) at Oxford. His research focuses on computer vision methods for automatically understanding the content of images, with applications to organizing and searching large image and video collections and to recognizing faces and text in images and videos. He is also the lead author of the VLFeat computer vision library.

Topic: Exploring and Exploiting Interpretable Semantics in GANs

Abstract: Complex machine learning models such as deep convolutional and recurrent neural networks have made great strides across computer vision applications, yet they are usually treated as black boxes: as models grow deeper in pursuit of better recognition accuracy, it becomes harder to understand the predictions they make and why. This course reviews recent progress in visualization, interpretation, and explanation methods for analyzing data and models in computer vision, aiming to build consensus on the emerging topic of machine learning interpretability by clarifying its motivation, typical methodologies, expected trends, and potential industrial applications. This is the first lecture of the tutorial, "Exploring and Exploiting Interpretable Semantics in GANs", presented by Bolei Zhou.


Overview: New methods for machine learning interpretability are published at an astonishing pace. Keeping up with all of them would be madness, and is simply impossible. That is why you will not find the newest and shiniest methods in this book, but rather the fundamental concepts of machine learning interpretability. These fundamentals prepare you for making machine learning models understandable.

One option for interpretability is to use inherently interpretable models, such as linear models or decision trees. The other option is model-agnostic interpretation tools, which can be applied to any supervised machine learning model. The model-agnostic chapters cover methods such as partial dependence plots and permutation feature importance. Model-agnostic methods work by changing the inputs of a machine learning model and measuring the change in its outputs, as sketched below.
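As a concrete illustration of this input-perturbation idea, here is a minimal Python sketch of permutation feature importance. It is not code from the book; the toy dataset, random forest model, and accuracy metric are arbitrary choices made only for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data and model, chosen only for illustration; any supervised model works.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

baseline = accuracy_score(y_val, model.predict(X_val))
rng = np.random.default_rng(0)

# Permutation importance: perturb one input column, measure the drop in output quality.
for j in range(X_val.shape[1]):
    X_perm = X_val.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature-target link
    drop = baseline - accuracy_score(y_val, model.predict(X_perm))
    print(f"feature {j}: accuracy drop = {drop:.3f}")
```

Features whose shuffling causes a large accuracy drop are the ones the model relies on most; the model itself is never inspected, which is what makes the method model-agnostic.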

This book teaches you how to make (supervised) machine learning models interpretable. The chapters contain some mathematical formulas, but you should be able to understand the ideas behind the methods even without the math. The book is not for people trying to learn machine learning from scratch; if you are new to machine learning, there are plenty of books and other resources for learning the basics. To get started, I recommend "The Elements of Statistical Learning" by Hastie, Tibshirani, and Friedman (2009) and Andrew Ng's "Machine Learning" online course on Coursera. Both the book and the course are free! The book closes with an optimistic outlook on the future of interpretable machine learning.

Table of Contents:

  • Preface
  • Chapter 1: Introduction
  • Chapter 2: Interpretability
  • Chapter 3: Datasets
  • Chapter 4: Interpretable Models
  • Chapter 5: Model-Agnostic Methods
  • Chapter 6: Example-Based Explanations
  • Chapter 7: Neural Network Interpretation
  • Chapter 8: A Look into the Crystal Ball
  • Chapter 9: Contributing
  • Chapter 10: Citing this Book

Interpretable-machine-learning.pdf

Machine learning models are often criticized as technological black boxes: feed in data and out comes the right answer, with no explanation of how it was reached. In his new book, Christoph Molnar argues that it is time to stop treating machine learning models as black boxes; alongside learning how to use models, we should learn to analyze how they make their decisions, and the book discusses how to make those black boxes interpretable.


Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related, and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the Predictive, Descriptive, Relevant (PDR) framework for discussing interpretations. The PDR framework provides three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post-hoc categories, with sub-groups including sparsity, modularity and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often under-appreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.


In structure learning, the output is generally a structure that is used as supervision information to achieve good performance. Since the interpretation of deep learning models has attracted increasing attention in recent years, it would be beneficial if we could learn an interpretable structure from deep learning models. In this paper, we focus on Recurrent Neural Networks (RNNs), whose inner mechanism is still not clearly understood. We find that a Finite State Automaton (FSA), which processes sequential data, has a more interpretable inner mechanism and can be learned from RNNs as such an interpretable structure. We propose two methods to learn an FSA from an RNN, based on two different clustering methods. We first give a graphical illustration of the FSA for human beings to follow, which shows its interpretability. From the FSA's point of view, we then analyze how the performance of RNNs is affected by the number of gates, as well as the semantic meaning behind the transitions of numerical hidden states. Our results suggest that RNNs with a simple gated structure, such as the Minimal Gated Unit (MGU), are more desirable, and that the transitions in the FSA leading to a specific classification result are associated with corresponding words that are understandable by human beings.
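As a rough sketch of the general recipe (not the paper's two specific clustering methods), the code below collects hidden states from an RNN, clusters them with k-means to form FSA states, and counts symbol-conditioned transitions between clusters. The GRU, vocabulary size, and random token sequences are all assumptions made purely for the example.

```python
import torch
import torch.nn as nn
from collections import Counter
from sklearn.cluster import KMeans

torch.manual_seed(0)
vocab, hidden, n_states = 5, 16, 4
rnn = nn.GRU(input_size=vocab, hidden_size=hidden, batch_first=True)  # stands in for a trained RNN

# Collect hidden states over a batch of (toy, random) token sequences.
tokens = torch.randint(vocab, (32, 10))                      # batch x time
inputs = torch.nn.functional.one_hot(tokens, vocab).float()
with torch.no_grad():
    outputs, _ = rnn(inputs)                                 # batch x time x hidden
H = outputs.reshape(-1, hidden).numpy()

# FSA states = clusters of hidden states.
labels = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(H)
labels = labels.reshape(32, 10)

# FSA transitions = how often cluster s moves to cluster s' when symbol w is read.
transitions = Counter()
for b in range(32):
    for t in range(1, 10):
        transitions[(labels[b, t - 1], int(tokens[b, t]), labels[b, t])] += 1

for (s, w, s2), count in transitions.most_common(5):
    print(f"state {s} --{w}--> state {s2}  ({count} times)")
```

In a real setting the RNN would be trained on the task of interest and the transition counts would be normalized into the automaton's transition table; the sketch only shows how clustered hidden states induce a discrete, inspectable structure.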


This paper reviews recent studies in understanding neural-network representations and in learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been the Achilles' heel of deep neural networks. At present, deep neural networks obtain high discrimination power at the cost of low interpretability of their black-box representations. We believe that high model interpretability may help people to break several bottlenecks of deep learning, e.g., learning from very few annotations, learning via human-computer communications at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs), and we revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
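One of the visualization techniques such surveys revisit is activation maximization: synthesizing an input by gradient ascent so that it strongly excites a chosen unit. Below is a minimal PyTorch sketch of the general technique, not code from the paper; the pretrained VGG-16, the layer index, the channel, and the optimization settings are all arbitrary assumptions.

```python
import torch
from torchvision import models

# Pretrained CNN (torchvision >= 0.13 weights API); we visualize one channel of an intermediate layer.
model = models.vgg16(weights="IMAGENET1K_V1").eval()
feature_extractor = model.features[:17]   # conv layers up to an arbitrarily chosen depth
channel = 12                              # arbitrarily chosen channel to visualize

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation = feature_extractor(img)[0, channel]
    loss = -activation.mean()             # gradient ascent on the channel's mean activation
    loss.backward()
    optimizer.step()

# `img` now roughly shows the visual pattern this channel responds to.
result = img.detach().clamp(0, 1)
print(result.shape)
```

Practical implementations usually add regularizers (blurring, jitter, total variation) to obtain cleaner visualizations; the sketch keeps only the core gradient-ascent loop.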


This paper presents a method of learning qualitatively interpretable models in object detection using popular two-stage region-based ConvNet detection systems (i.e., R-CNN). R-CNN consists of a region proposal network and an RoI (Region-of-Interest) prediction network. By interpretable models, we focus on weakly-supervised extractive rationale generation, that is, learning to unfold latent discriminative part configurations of object instances automatically and simultaneously in detection without using any supervision for part configurations. We utilize a top-down hierarchical and compositional grammar model embedded in a directed acyclic AND-OR Graph (AOG) to explore and unfold the space of latent part configurations of RoIs. We propose an AOGParsing operator to substitute the RoIPooling operator widely used in R-CNN, so the proposed method is applicable to many state-of-the-art ConvNet based detection systems. The AOGParsing operator aims to harness both the explainable rigor of top-down hierarchical and compositional grammar models and the discriminative power of bottom-up deep neural networks through end-to-end training. In detection, a bounding box is interpreted by the best parse tree derived from the AOG on-the-fly, which is treated as the extractive rationale generated for interpreting detection. In learning, we propose a folding-unfolding method to train the AOG and ConvNet end-to-end. In experiments, we build on top of R-FCN and test the proposed method on the PASCAL VOC 2007 and 2012 datasets with performance comparable to state-of-the-art methods.
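To convey the flavor of parsing with an AND-OR graph, here is a toy Python sketch (not the paper's AOGParsing operator, and with made-up node names and scores): terminal nodes carry scores, AND nodes sum their children, and OR nodes select their best child, so the highest-scoring parse tree is recovered recursively.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    kind: str                                   # "AND", "OR", or "TERMINAL"
    score: float = 0.0                          # used only by terminal nodes
    children: List["Node"] = field(default_factory=list)
    name: str = ""

def best_parse(node: Node) -> Tuple[float, list]:
    """Return (score, flattened parse tree) for the best derivation rooted at `node`."""
    if node.kind == "TERMINAL":
        return node.score, [node.name]
    child_parses = [best_parse(c) for c in node.children]
    if node.kind == "AND":                      # compose all children
        return (sum(s for s, _ in child_parses),
                [node.name] + [n for _, parse in child_parses for n in parse])
    best_score, best = max(child_parses, key=lambda sp: sp[0])   # OR: pick best child
    return best_score, [node.name] + best

# A tiny, hypothetical AOG for one RoI: either a whole-object template or two latent parts.
whole = Node("TERMINAL", score=1.2, name="whole-template")
left  = Node("TERMINAL", score=0.9, name="left-part")
right = Node("TERMINAL", score=0.8, name="right-part")
parts = Node("AND", children=[left, right], name="two-part-config")
root  = Node("OR", children=[whole, parts], name="object")

print(best_parse(root))   # -> (1.7, ['object', 'two-part-config', 'left-part', 'right-part'])
```

In the actual detector the terminal scores come from RoI features and the whole structure is trained end-to-end; the toy example only illustrates how a parse tree is selected from the graph.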
