We propose a novel data-driven method to learn a mixture of multiple kernels with random features that is certifiably robust against adversarial inputs. Specifically, we consider a distributionally robust optimization of the kernel-target alignment with respect to the distribution of training samples over a distributional ball defined by the Kullback-Leibler (KL) divergence. The distributionally robust optimization problem can be recast as a min-max optimization whose objective function includes a log-sum term. We develop a mini-batch stochastic primal-dual proximal method to solve this min-max optimization; because the naive mini-batch estimate of the log-sum term is biased, we debias it using the Gumbel perturbation technique. We establish theoretical guarantees for the performance of the proposed multiple kernel learning method. In particular, we prove the consistency, asymptotic normality, and stochastic equicontinuity of the empirical estimators, as well as their minimax rate. In addition, leveraging the notions of Rademacher and Gaussian complexities together with matrix concentration inequalities, we establish distributionally robust generalization bounds that are tighter than previously known bounds. We validate our kernel learning approach for classification with kernel SVMs on a synthetic dataset generated by sampling multivariate Gaussian distributions with different variance structures. We also apply our kernel learning approach to the MNIST dataset and evaluate its robustness to perturbations of the input images under different adversarial models. More specifically, we examine the robustness of the proposed kernel model selection technique against FGSM, PGM, C\&W, and DDN adversarial perturbations, and compare its performance with alternative state-of-the-art multiple kernel learning paradigms.
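A minimal sketch of the formulation, under assumed notation (the per-sample alignment loss $\ell_i(w)$, mixture weights $w$ over the probability simplex $\Delta_m$ of $m$ kernels, empirical distribution $\widehat{P}_n$, and KL-ball radius $r$ are illustrative, not taken verbatim from the paper): by Lagrangian duality for KL-constrained distributionally robust problems,
\begin{equation*}
\max_{w \in \Delta_m} \; \min_{Q :\, D_{\mathrm{KL}}(Q \,\|\, \widehat{P}_n) \le r} \; \mathbb{E}_{Q}\!\left[\ell(w; x, y)\right]
\;=\;
\max_{w \in \Delta_m} \; \sup_{\lambda > 0} \; \left\{ -\lambda \log \frac{1}{n} \sum_{i=1}^{n} e^{-\ell_i(w)/\lambda} \;-\; \lambda r \right\},
\end{equation*}
which exhibits the log-sum term in the min-max objective. A plug-in mini-batch estimate of this term is biased, since $\log$ is concave; the Gumbel perturbation technique exploits the identity that, for $g_1, \dots, g_n$ i.i.d.\ standard Gumbel variables,
\begin{equation*}
\mathbb{E}_{g}\!\left[ \max_{1 \le i \le n} \big( x_i + g_i \big) \right] \;=\; \log \sum_{i=1}^{n} e^{x_i} \;+\; \gamma,
\end{equation*}
where $\gamma$ is the Euler--Mascheroni constant, so that $\max_i (x_i + g_i) - \gamma$ is an unbiased estimator of the log-sum-exp term.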