A Collection of Papers on Robust Machine Learning

August 18, 2019 · 专知 (Zhuanzhi)

[Overview] "Robust machine learning" seeks machine learning algorithms that keep working well even when their underlying assumptions are violated. The central assumption in machine learning is that the training data are independent and identically distributed (i.i.d.) and representative of the inputs the system will see in the future. Researchers are exploring ways to make machine learning systems more robust when this assumption does not hold. This post compiles the papers on robust machine learning curated by P2333.

Repository:

https://github.com/P2333/Papers-of-Robust-ML

General Defenses

  • Barrage of Random Transforms for Adversarially Robust Defense (CVPR 2019) 
    This paper applies a set of different random input transformations as an off-the-shelf defense (a minimal sketch appears after this list).

  • Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Accuracy 
    This paper introduces the mixup method into adversarial training to improve model performance on clean images.

  • Robust Decision Trees Against Adversarial Examples (ICML 2019) 
    A method to enhance the robustness of tree models, including GBDTs.

  • Adversarial Training for Free! 
    A fast method for adversarial training, which reuses the back-propagation gradients of a single pass both for updating the weights and for crafting adversarial examples (see the sketch after this list).

  • Improving Adversarial Robustness via Promoting Ensemble Diversity (ICML 2019) 
    Previous work constructs ensemble defenses by individually enhancing each member and then directly averaging the predictions. In this work, the authors propose the adaptive diversity promoting (ADP) regularizer to further improve robustness by promoting ensemble diversity, as a method orthogonal to other defenses.

  • Ensemble Adversarial Training- Attacks and Defenses (ICLR 2018) 
    Ensemble adversarial training uses several pre-trained models; in each training batch, one of the currently trained model or the pre-trained models is randomly selected to craft adversarial examples.

  • Max-Mahalanobis Linear Discriminant Analysis Networks (ICML 2018) 
    This is one of our works. We explicitly model the feature distribution as a Max-Mahalanobis distribution (MMD), which has the maximal margin among classes and can lead to guaranteed robustness.

  • A Spectral View of Adversarially Robust Features (NeurIPS 2018) 
    Given the entire dataset X, use the eigenvectors of the spectral graph as robust features. [Appendix]

  • Deep Defense: Training DNNs with Improved Adversarial Robustness (NeurIPS 2018) 
    They follow the linear assumption in the DeepFool method. DeepDefense pushes the decision boundary away from correctly classified samples, and pulls it closer to misclassified ones.

  • Feature Denoising for Improving Adversarial Robustness (CVPR 2019) 
    This paper applies non-local neural networks and large-scale adversarial training with 128 GPUs (using the training tricks from 'Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour'), and shows a large improvement over the previous SOTA trained with 50 GPUs.
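
To make two of the defenses above more concrete, here are minimal PyTorch-style sketches. The first illustrates an input-transformation defense in the spirit of the barrage of random transforms: each input passes through a randomly chosen, randomly ordered subset of transforms before classification. The transform set, its parameters, and the function names are illustrative assumptions, not the paper's exact barrage.

```python
# A minimal sketch of a random input-transformation defense (illustrative, not
# the paper's exact barrage of transforms).
import random
import torch
import torchvision.transforms as T

BARRAGE = [
    T.ColorJitter(brightness=0.3, contrast=0.3),
    T.RandomAffine(degrees=10, translate=(0.05, 0.05)),
    T.GaussianBlur(kernel_size=3),
    T.RandomResizedCrop(size=32, scale=(0.9, 1.0)),
]

@torch.no_grad()
def defended_predict(model, x):
    """x: a batch of images in [0, 1], shape (N, C, H, W)."""
    k = random.randint(1, len(BARRAGE))
    for transform in random.sample(BARRAGE, k=k):  # random subset in random order
        x = transform(x)
    return model(x).argmax(dim=1)
```

The second sketches the gradient-reuse idea behind "Adversarial Training for Free!": a single backward pass yields gradients with respect to both the weights and the input perturbation, so the perturbation is updated "for free" alongside the weights. For simplicity this loop re-initializes the perturbation for each minibatch (the paper additionally warm-starts it from the previous one); model, loader, optimizer, epsilon and m_replays are placeholders, not the authors' code.

```python
import torch
import torch.nn.functional as F

def free_adversarial_training(model, loader, optimizer, epsilon=8 / 255, m_replays=4):
    model.train()
    for x, y in loader:
        # Perturbation for this minibatch; the original method warm-starts it
        # from the previous minibatch instead of re-initializing.
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(m_replays):
            loss = F.cross_entropy(model(x + delta), y)
            optimizer.zero_grad()
            loss.backward()          # one backward pass: gradients w.r.t. weights AND delta
            optimizer.step()         # weight update reuses those gradients
            with torch.no_grad():    # perturbation update reuses the very same gradients
                delta += epsilon * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
    return model
```

Compared with K-step PGD adversarial training, no extra backward passes are spent solely on crafting the attack, which is where the speed-up comes from.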


Adversarial Detection

  • Towards Robust Detection of Adversarial Examples (NeurIPS 2018) 
    This is one of our works. We train the networks with the reverse cross-entropy (RCE) loss, which maps normal features onto low-dimensional manifolds, so that detectors can better separate adversarial examples from normal ones.

  • A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks (NeurIPS 2018) 
    Fit a Gaussian discriminant analysis (GDA) model on the learned features, and use the Mahalanobis distance as the detection metric (a minimal sketch follows this list).

  • Robust Detection of Adversarial Attacks by Modeling the Intrinsic Properties of Deep Neural Networks (NeurIPS 2018) 
    They fit a Gaussian mixture model (GMM) on the learned features, and use the resulting probability as the detection metric.
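
As a concrete illustration of the Mahalanobis-distance detector above, here is a minimal NumPy sketch (an assumed setup, not the authors' released code): fit class-conditional Gaussians with a shared (tied) covariance on penultimate-layer features, then score a test feature by its minimum squared Mahalanobis distance to the class means; unusually large scores indicate adversarial or out-of-distribution inputs.

```python
import numpy as np

def fit_gda(features, labels, num_classes):
    """features: (N, d) penultimate-layer features; labels: (N,) integer class labels."""
    means = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    centered = features - means[labels]          # center each sample by its class mean
    cov = centered.T @ centered / len(features)  # shared (tied) covariance estimate
    precision = np.linalg.pinv(cov)
    return means, precision

def mahalanobis_score(feature, means, precision):
    """Minimum squared Mahalanobis distance to any class mean; larger = more anomalous."""
    diffs = means - feature                      # (num_classes, d)
    return np.einsum('cd,de,ce->c', diffs, precision, diffs).min()
```

A threshold on this score, chosen on held-out data, then separates clean inputs from adversarial ones; the original paper further combines scores from multiple layers with a logistic-regression detector and adds a small input pre-processing step.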


Verification

  • Automated Verification of Neural Networks: Advances, Challenges and Perspectives 
    This paper provides an overview of the main verification methods and reviews previous work on combining automated verification with machine learning. It also offers some insights on future directions for combining these two domains.

  • Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope (ICML 2018) 
    Via robust optimization (solving a linear program), they obtain a point-wise robustness bound within which no adversarial example exists. Experiments are done on MNIST.

  • Scaling Provable Adversarial Defenses (NeurIPS 2018) 
    They add three tricks to improve the scalability of the previously proposed method. Experiments are done on MNIST and CIFAR-10.


Theoretical Analysis

  • Adversarial Examples Are a Natural Consequence of Test Error in Noise (ICML 2019) 
    This paper connects general corruption robustness with adversarial robustness, and recommends that adversarial defense methods also be tested against general-purpose noise.

  • Adversarial Examples Are Not Bugs, They Are Features 
    They claim that adversarial examples can be directly attributed to the presence of non-robust features, which are highly predictive but locally quite sensitive.

  • On Evaluating Adversarial Robustness 
    Some analyses on how to correctly evaluate the robustness of adversarial defenses.

  • Robustness of Classifiers: from Adversarial to Random Noise (NeurIPS 2016)

  • Adversarial Vulnerability for Any Classifier (NeurIPS 2018) 
    A uniform upper bound on the robustness of any classifier, for data sampled from smooth generative models.

  • Adversarially Robust Generalization Requires More Data (NeurIPS 2018) 
    This paper shows that robust generalization requires much higher sample complexity than standard generalization, on two simple distributional models of the data.


Empirical Analysis

  • Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong 
    This paper tests ensembles of existing detection-based defenses, and shows that these ensemble defenses can still be evaded by white-box attacks.





-END-
