VALSE Webinar Format Update:
Starting from January 2019, the VALSE Webinar has reformed its format, moving from a single speaker per session to one of two possible formats:
1) Webinar panel session: each session has a discussion theme. Two outstanding speakers related to the theme first give invited talks (30 minutes each), followed by a discussion of the theme together with 2-3 additional invited panelists (30 minutes).
2) Webinar keynote: each session invites one senior expert to give a systematic, in-depth presentation of their research in their area of expertise: a 50-minute talk, 10 minutes of host-speaker interaction, and 10 minutes of open Q&A.
Time: 8:00 pm, Wednesday, January 16, 2019 (Beijing time)
Theme: Opening the Black Box of Deep Learning: Methods and Applications of Explainable AI
Host: Hang Su (Tsinghua University)
Speaker: Bolei Zhou (The Chinese University of Hong Kong)
Title: Emergence of Interpretable Visual Concepts From Discriminative Networks to Generative Networks
Speaker: Quanshi Zhang (Shanghai Jiao Tong University)
Title: Deep Visual Models with Interpretable Features and Modularized Structures
Panel topics:
What are the future research directions of interpretable learning?
What are the significance and research goals of interpretability for deep learning?
Some researchers now argue that interpretability is not a necessary condition for learning; what is your view?
How can interpretable learning be measured scientifically? How should the objectivity and reliability of explanations be evaluated?
What are the application scenarios of interpretability?
What is the biggest challenge for interpretable learning?
Panelists:
Bolei Zhou (The Chinese University of Hong Kong), Quanshi Zhang (Shanghai Jiao Tong University), Shixia Liu (Tsinghua University), Risheng Liu (Dalian University of Technology)
*You are welcome to post theme-related questions in the comments below; the host and panelists will pick some of the most popular ones to add to the panel discussion!
Speaker: Bolei Zhou (The Chinese University of Hong Kong)
Time: 8:00 pm, Wednesday, January 16, 2019 (Beijing time)
Title: Emergence of Interpretable Visual Concepts From Discriminative Networks to Generative Networks
Speaker bio:
Bolei Zhou is an Assistant Professor in the Information Engineering Department at the Chinese University of Hong Kong. He received his PhD in computer science from the Massachusetts Institute of Technology. His research is in computer vision and machine learning, focusing on visual scene understanding and interpretable deep learning. He received the Facebook Fellowship, the Microsoft Research Fellowship, and the MIT Greater China Fellowship, and his research has been featured in media outlets such as TechCrunch, Quartz, and MIT News.
Homepage:
http://bzhou.ie.cuhk.edu.hk/
Abstract:
I will talk about how the individual units in deep networks learn to disentangle hidden meaningful concepts in both the tasks of classifying and synthesizing images of natural scenes. A unified tool, Network Dissection, is developed to identify the emergent object concepts among the internal representations. I will further show some ongoing progress on measuring the degree of disentanglement in the neural coding using controlled experiments.
References:
[1] Interpreting Deep Visual Representations via Network Dissection. Bolei Zhou*, David Bau*, Aude Oliva, Antonio Torralba. IEEE Transactions on Pattern Analysis and Machine Intelligence, June 2018.
[2] GAN Dissection: Visualizing and Understanding Generative Adversarial Networks. David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, Antonio Torralba. ICLR 2019.
[3] Revisiting the Importance of Individual Units in CNNs via Ablation. Bolei Zhou, Yiyou Sun, David Bau, Antonio Torralba. arXiv:1806.02891, 2018.
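As a rough illustration of the Network Dissection idea mentioned in the abstract (a hypothetical sketch, not the authors' released code): a unit is matched to a visual concept by thresholding its upsampled activation map at a high quantile and scoring the overlap against a binary concept segmentation mask with intersection-over-union (IoU). The masks and values below are made up for illustration:

```python
import numpy as np

def dissection_iou(activation, concept_mask, quantile=0.995):
    """Score how well one unit's activation map matches a concept mask.

    Threshold the unit's (upsampled) activation map at a high quantile,
    then compute intersection-over-union with a binary concept mask.
    """
    thresh = np.quantile(activation, quantile)
    unit_mask = activation > thresh
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union > 0 else 0.0

# Toy example: a unit that fires exactly inside a synthetic "object" region.
act = np.zeros((8, 8))
act[2:4, 2:4] = 1.0          # high activation on a 2x2 patch
concept = np.zeros((8, 8), dtype=bool)
concept[2:4, 2:4] = True     # concept mask covers the same patch

# Low quantile here so the tiny toy patch survives the threshold.
print(dissection_iou(act, concept, quantile=0.9))  # perfect overlap -> 1.0
```

In the actual method, the threshold is computed from activations over a whole dataset and a unit counts as a detector for the concept whose IoU exceeds a small cutoff; this toy only shows the scoring step for a single image.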
Speaker: Quanshi Zhang (Shanghai Jiao Tong University)
Time: 8:30 pm, Wednesday, January 16, 2019 (Beijing time)
Title: Deep Visual Models with Interpretable Features and Modularized Structures
Speaker bio:
Quanshi Zhang is an associate professor at Shanghai Jiao Tong University. He received his B.S. degree in machine intelligence from Peking University, China, in 2009, and his M.S. and Ph.D. degrees from the Center for Spatial Information Science at the University of Tokyo, Japan, in 2011 and 2014, respectively. From 2014 to 2018, he was a postdoctoral researcher at the University of California, Los Angeles. His research interests range across computer vision and machine learning. He currently leads a group on explainable AI, working on topics including explainable neural networks, explanation of pre-trained neural networks, and unsupervised/weakly-supervised learning.
Homepage:
qszhang.com
Abstract:
Although deep neural networks (DNNs) have achieved superior performance in various visual tasks, the knowledge representation inside a DNN is still considered a black box. In this talk, I mainly introduce several core challenges in interpreting feature representations in DNNs and the corresponding solutions, including: (1) learning a deep coupling of semantic graphs and DNNs; (2) learning disentangled and interpretable feature representations in intermediate layers of DNNs; (3) resolving the conflict between a feature's discrimination power and its interpretability; and (4) learning a modular universal neural network with interpretable structures.
References:
[1] Interpretable Convolutional Neural Networks. Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu. CVPR 2018.
[2] Interpreting CNN Knowledge via an Explanatory Graph. Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu. AAAI 2018.
Panelist: Shixia Liu (Tsinghua University)
Panelist bio:
Dr. Shixia Liu is a tenured associate professor at Tsinghua University. Her main research interests are visual analytics, text mining, and information visualization. She served as papers co-chair of IEEE VIS (VAST) 2016 and 2017, a CCF rank-A conference; as associate editor-in-chief and associate editor of IEEE Transactions on Visualization and Computer Graphics; as associate editor of IEEE Transactions on Big Data; and as program committee chair of IEEE Pacific Visualization 2015. She is also on the editorial board of the Information Visualization journal and has served on the program committees of many international conferences, including InfoVis, VAST, CHI, KDD, ACM Multimedia, ACM IUI, SDM, and PacificVis. She served as Meetup co-chair of IEEE VIS 2014 and Tutorial co-chair of IEEE VIS 2015 (IEEE VIS organizing committee).
Homepage:
http://cgcad.thss.tsinghua.edu.cn/shixia/
Panelist: Risheng Liu (Dalian University of Technology)
Panelist bio:
Risheng Liu received his Ph.D. in computational mathematics from Dalian University of Technology and was a postdoctoral researcher in computing science at The Hong Kong Polytechnic University. He is currently an associate professor and Ph.D. supervisor in the International School of Information and Software at Dalian University of Technology, where he chairs the Department of Digital Media Technology and serves as deputy director of the Institute of Geometric Computing and Intelligent Media Technology. His main research interests are machine learning, computer vision, and optimization methods. In recent years he has published more than 90 papers, including more than 20 in CCF rank-A venues.
Homepage:
dutmedia.org
How to join the 19-03 VALSE online talk:
Long-press or scan the QR code below to follow the "VALSE" WeChat official account (valse_wechat), then reply "03期" to the account to get the live-stream link.
Special thanks to the main organizers of this Webinar:
Organizing AC: Hang Su
Co-organizing AC: Risheng Liu
Responsible AC: Xun Cao
How to participate:
1. VALSE Webinar sessions are held on a live-streaming platform. During a session, the speaker uploads slides or shares the screen; the audience can see the slides, hear the speaker's voice, and interact with the speaker via the chat function.
2. To participate, follow the VALSE WeChat official account (valse_wechat) or join a VALSE QQ group (groups A through G are currently full; apart from speakers and other invited guests, you can only apply to join VALSE group I, group number: 480601274).
*Note: when applying to join a VALSE QQ group, you must provide your name, affiliation, and role; all three are required. After joining, please set your group name to your real name, role, and affiliation. Roles: staff at universities and research institutes, T; industry R&D, I; Ph.D. students, D; master's students, M.
3. About 5 minutes before a session starts, the speaker opens the live stream; click the link to join. Windows PCs, Macs, mobile phones, and other devices are supported.
4. During the session, please refrain from off-topic messages so as not to disrupt the event.
5. If you cannot hear the audio or see the video during the session, exiting and rejoining usually fixes the problem.
6. Join from a fast network whenever possible, preferably over a wired connection.
7. The VALSE WeChat official account publishes the announcement and live-stream link for the following week's Webinar every Thursday.
8. With the speaker's permission, the slides of each Webinar talk are posted as [slides] at the bottom of the corresponding announcement on the VALSE website.
9. With the speaker's permission, the video of each Webinar talk is uploaded to the VALSE channel on iQIYI; follow "Valse Webinar" on iQIYI to watch.