VALSE Webinar 19-25: Delving into Adversarial Machine Learning

VALSE, September 19, 2019

Time: 20:30 (Beijing time), Wednesday, September 25, 2019

Topic: Delving into Adversarial Machine Learning

Host: Changqing Zou (Huawei Noah's Ark Lab, Canada)


Speaker: Cihang Xie (Johns Hopkins University)

Talk Title: Feature Denoising for Improving Adversarial Robustness


Speaker: Boqing Gong (Google)

Talk Title: Gaussian Attack by Learning the Distributions of Adversarial Examples


Panel Topics:

1. Why do adversarial examples exist for current models?

2. What are the mainstream methods for generating adversarial examples, and how do they mainly differ?

3. What are the pros and cons of using adversarial training to improve model robustness?

4. Besides adversarial training, what other approaches can improve model robustness?

5. In real-world scenarios, do we need to consider the threat of adversarial examples?

6. Is there a connection between adversarial examples and model interpretability?


Panelists:

Bo Li (UIUC), Wei Shen (Johns Hopkins University), Cihang Xie (Johns Hopkins University), Boqing Gong (Google)


*Feel free to post topic-related questions in the comments below; the host and panelists will select some of the most popular ones to add to the panel discussion!

 

Speaker: Cihang Xie (Johns Hopkins University)

Time: 20:30 (Beijing time), Wednesday, September 25, 2019

Talk Title: Feature Denoising for Improving Adversarial Robustness


Speaker Bio:

Cihang Xie is a Ph.D. student in Computer Science at Johns Hopkins University, advised by Prof. Alan Yuille. His research interests lie in computer vision and machine learning, especially adversarial attacks and defenses, explainable machine learning, and object detection. He was a summer intern at Facebook AI Research in 2018 and at Google in 2019.


Homepage:

https://cihangxie.github.io/


Abstract:

Adversarial examples that can fool the state-of-the-art computer vision systems present challenges to convolutional networks and opportunities for understanding them. In this talk, I will present our recent work on defending against adversarial examples. Noticing that small adversarial perturbations on images lead to significant noise in the feature space, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ~10%. Code is available at:

https://github.com/facebookresearch/ImageNet-Adversarial-Training.
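The non-local means operation at the heart of the denoising blocks can be sketched in a few lines. The following is a minimal NumPy illustration of a softmax (Gaussian-weighted) non-local operation with a residual connection, assuming a single feature map; it is not the authors' released implementation, which additionally applies a 1x1 convolution to the denoised features and trains the blocks end-to-end inside the network.

```python
import numpy as np

def nonlocal_denoise(x):
    """Softmax non-local means over a feature map.

    x: (C, H, W) feature map. Each output position is a weighted
    average of ALL positions, with weights derived from pairwise
    feature similarity, followed by a residual connection.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                   # (C, N), N spatial positions
    logits = flat.T @ flat                       # (N, N) pairwise similarity
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)            # softmax over positions
    denoised = flat @ w.T                        # (C, N) weighted averages
    return x + denoised.reshape(C, H, W)         # residual connection

rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 8, 8)).astype(np.float32)
out = nonlocal_denoise(feat)
print(out.shape)  # (4, 8, 8)
```

Because every output position averages over the entire feature map, isolated adversarial noise at a single position is smoothed out, while the residual connection preserves the original signal.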


References:

[1] Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He, “Feature Denoising for Improving Adversarial Robustness”, CVPR 2019.

[2] Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, “Mitigating Adversarial Effects Through Randomization”, ICLR 2018.

[3] Cihang Xie, Jianyu Wang, Zhishuai Zhang, Yuyin Zhou, Lingxi Xie, Alan Yuille, “Adversarial Examples for Semantic Segmentation and Object Detection”, ICCV 2017.

 

Speaker: Boqing Gong (Google)

Time: 21:00 (Beijing time), Wednesday, September 25, 2019

Talk Title: Gaussian Attack by Learning the Distributions of Adversarial Examples


Speaker Bio:

Boqing Gong is a research scientist at Google, Seattle and a remote principal investigator at ICSI, Berkeley. His research in machine learning and computer vision focuses on modeling, algorithms, and visual recognition. Before joining Google in 2019, he worked at Tencent and was a tenure-track Assistant Professor at the University of Central Florida (UCF). He received an NSF CRII award in 2016 and an NSF BIGDATA award in 2017, both of which were the first of their kind ever granted to UCF. He is/was a (senior) area chair of NeurIPS 2019, ICCV 2019, ICML 2019, AISTATS 2019, AAAI 2020, and WACV 2018--2020. He received his Ph.D. in 2015 at the University of Southern California, where the Viterbi Fellowship partially supported his work.


Homepage:

http://boqinggong.info


Abstract:

Recent studies reveal that almost all input images lie sufficiently close to the classification boundaries of deep neural networks (DNNs) that one can move an image from one side of a boundary to the other by adding an imperceptible perturbation. How to find such adversarial perturbations, however, depends on the networks' idiosyncrasies as well as the associated defense techniques. Under some circumstances, one has to devise a new attack method just to evaluate the robustness of a novel defense. Hence, there is a pressing need for adversarial attack methods that are both universal and strong, so as to accelerate research on the robustness and security of DNNs.


In this talk, I will present two of our recent works striving for the desired properties above. The first learns a probability distribution over a small region centered at the input, such that a sample drawn from this distribution is likely an adversarial example, without access to the DNN's internal layers or weights. Our approach is universal, as a single algorithm can readily attack different DNNs. It is also strong, circumventing six defenses completely, five for more than 90% of test examples, and one for about half of the inputs --- all of these defenses were published in 2018 and 2019. The second work learns a physical camouflage that hides a vehicle from DNN-based object detectors. I will conclude the talk with a brief overview of my other interests: domain adaptation, video summarization, and label-efficient learning.
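The distribution-learning idea behind the first work can be sketched with a score-based (NES-style) gradient estimate: sample perturbations from a Gaussian, query only the model's output scores, and shift the Gaussian's mean toward perturbations that lower the true-class margin. The sketch below is a hypothetical toy, not the NATTACK implementation --- a linear model stands in for the black-box DNN, only the Gaussian mean is learned, and the paper's change-of-variables parameterization is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box classifier: we may only query its scores,
# never its gradients. A toy 3-class linear model stands in for a DNN.
W = rng.normal(size=(3, 10))
def query_scores(x):
    return W @ x

def margin_loss(x, true_label):
    """Goes negative once the true class is no longer the top-1 prediction."""
    s = query_scores(x)
    other = np.delete(s, true_label).max()
    return s[true_label] - other

def nes_gaussian_attack(x0, label, sigma=0.1, lr=0.05,
                        pop=50, steps=200, eps=0.5):
    """Learn the mean of a Gaussian over perturbations so that samples
    from it are likely adversarial, using only black-box queries."""
    mu = np.zeros_like(x0)
    for _ in range(steps):
        noise = rng.normal(size=(pop,) + x0.shape)
        losses = np.array([
            margin_loss(np.clip(x0 + np.clip(mu + sigma * n, -eps, eps),
                                -1, 1), label)
            for n in noise])
        # NES estimate of the gradient of E[loss] w.r.t. mu; descend on it.
        grad = (losses[:, None] * noise).mean(axis=0) / sigma
        mu -= lr * grad
        if margin_loss(np.clip(x0 + np.clip(mu, -eps, eps), -1, 1), label) < 0:
            break  # misclassification achieved
    return np.clip(mu, -eps, eps)

x0 = rng.normal(size=10)
label = int(np.argmax(query_scores(x0)))
delta = nes_gaussian_attack(x0, label)
adv = np.clip(x0 + delta, -1, 1)
print("attack succeeded:", int(np.argmax(query_scores(adv))) != label)
```

Because the update uses only sampled losses, the same loop applies unchanged to any model that exposes scores, which is what makes distribution-based attacks universal across architectures and defenses.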


References:

[1] Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang, Boqing Gong, “NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks”, ICML 2019.


[2] Yang Zhang, Hassan Foroosh, Philip David, Boqing Gong, “CAMOU: Learning Physical Vehicle Camouflages to Adversarially Attack Detectors in the Wild”, ICLR 2019.

 

Panelist: Bo Li (UIUC)


Bio:

Bo Li is an assistant professor in Computer Science at the University of Illinois at Urbana-Champaign. She received the Symantec Research Labs Graduate Fellowship in 2015. Her research focuses on machine learning, security, privacy, game theory, social networks, and adversarial deep learning. She has designed several robust learning algorithms, a scalable framework for achieving robustness for a range of learning methods, and privacy preserving data publishing systems. She is interested in both theoretical analysis of general machine learning models and developing practical systems.


Homepage:

https://aisecure.github.io/

 

Panelist: Wei Shen (Johns Hopkins University)


Bio:

Wei Shen is a Research Assistant Professor in Computer Science at Johns Hopkins University. He received his B.S. and Ph.D. degrees, both in Electronics and Information Engineering, from the Huazhong University of Science and Technology (HUST), Wuhan, China, in 2007 and 2012, respectively. From April 2011 to November 2011, he was an intern at Microsoft Research Asia. In 2012, he joined the School of Communication and Information Engineering, Shanghai University, where he served as an assistant and then associate professor until October 2018. In 2016, he began a visit to the Department of Computer Science, Johns Hopkins University, hosted by Prof. Alan Yuille. He has over 40 peer-reviewed publications in machine learning and computer vision, including in IEEE Trans. PAMI, IEEE Trans. Image Processing, NIPS, ICML, ICCV, CVPR, and ECCV.


Homepage:

http://wei-shen.weebly.com/

 
 

Host: Changqing Zou (Huawei Technologies Canada, Noah's Ark Lab)


Host Bio:

Changqing Zou is a Principal Research Scientist at Huawei Technologies Canada's Noah's Ark Lab. Prior to joining Huawei Canada, he was a Research Assistant Professor at the University of Maryland, College Park. His research focuses on algorithms that allow computers to understand various data, including 2D/3D shapes and images, and on applications built on such understanding. Zou received his Ph.D. in early 2015 from the Multimedia Lab of the Shenzhen Institutes of Advanced Technology, CAS. While completing his doctorate, he was a visiting researcher at MSRA and the City University of Hong Kong from 2013 to 2014. He was also a postdoctoral researcher at Simon Fraser University in Canada from 2015 to 2017.


Homepage:

https://changqingzou.weebly.com/


How to join Webinar 19-25:


Long-press or scan the QR code below to follow the "VALSE" WeChat official account (valse_wechat), then reply "25期" to receive the live-stream link.



Special thanks to the main organizers of this Webinar:

Organizing AC: Changqing Zou (Huawei Noah's Ark Lab, Canada)

Co-organizing AC: Wanli Ouyang (University of Sydney)

Responsible AC: Junchi Yan (Shanghai Jiao Tong University)


About the VALSE Webinar format:

Since January 2019, VALSE Webinars have moved from a single-speaker format to one of two possible formats:

1) Themed panel session: each event has a discussion theme; two invited speakers first give talks related to the theme (30 minutes each), and then 2-3 additional guests join them for a panel discussion on the theme (30 minutes).

2) Invited talk: each event features one senior expert who gives a systematic, in-depth presentation of his or her research area: a 50-minute talk, 10 minutes of host-speaker interaction, and 10 minutes of open Q&A.


How to participate:

1. VALSE Webinars are held on a live-streaming platform. During the event, speakers upload slides or share their screen, so the audience can see the slides, hear the speaker, and interact via the chat function.

2. To participate, follow the VALSE WeChat official account (valse_wechat) or join a VALSE QQ group (groups A through J are currently full; except for speakers and other invited guests, you may only apply to join VALSE group K, group number: 691615571).

*Note: when applying to join a VALSE QQ group, you must provide your name, affiliation, and role; none may be omitted. After joining, please set your display name to name-role-affiliation. Roles: university or research-institute staff, T; industry R&D, I; Ph.D. student, D; master's student, M.

3. About 5 minutes before the event starts, the speaker opens the live stream; click the live-stream link to join. Windows PCs, Macs, mobile phones, and other devices are supported.

4. During the event, please refrain from off-topic messages so as not to disrupt the session.

5. If you cannot hear or see the stream during the event, leaving and rejoining usually resolves the problem.

6. A fast network connection is strongly recommended; prefer a wired connection.

7. The VALSE WeChat account publishes the announcement and live-stream link for next week's Webinar every Thursday.

8. Talk slides (with the speaker's permission) are posted at the bottom of each event's announcement on the VALSE website, marked [slides].

9. Talk videos (with the speaker's permission) are posted to the VALSE channel on iQIYI; follow "Valse Webinar" on iQIYI to watch.
