Random Fourier features (RFFs) provide a promising approach to kernel learning from a spectral viewpoint. Existing RFF-based kernel learning methods usually work in a two-stage manner. In the first stage, learning the optimal feature map is often formulated as a target alignment problem, which aims to align the learned kernel with a pre-defined target kernel (usually the ideal kernel). In the second stage, a linear learner is trained on the mapped random features. Nevertheless, the pre-defined kernel used in target alignment is not necessarily optimal for the generalization of the linear learner. Instead, in this paper, we consider a one-stage process that incorporates kernel learning and the linear learner into a unified framework. Specifically, a generative network based on RFFs is devised to learn the kernel implicitly, followed by a linear classifier parameterized as a fully connected layer. The generative network and the classifier are then jointly trained by solving the empirical risk minimization (ERM) problem, yielding a one-stage solution. This end-to-end scheme naturally allows deeper features, corresponding to a multi-layer structure, and shows superior generalization performance over classical two-stage, RFF-based methods on real-world classification tasks. Moreover, inspired by the randomized resampling mechanism of the proposed method, its enhanced adversarial robustness is investigated and experimentally verified.
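As a rough illustration of the one-stage scheme described above, the following is a minimal sketch, assuming PyTorch: a small generator network transforms resampled Gaussian noise into Fourier frequencies (implicitly defining a learned kernel), the resulting RFF map feeds a fully connected linear classifier, and both are trained jointly under ERM. The class names (`RFFGenerator`, `OneStageRFF`), the generator architecture, and all hyperparameters are hypothetical choices for illustration, not the paper's exact implementation.

```python
import math
import torch
import torch.nn as nn

class RFFGenerator(nn.Module):
    """Transforms resampled Gaussian noise into Fourier frequencies,
    implicitly parameterizing a learned spectral density (hence a kernel)."""
    def __init__(self, input_dim, num_features, hidden_dim=64):
        super().__init__()
        self.num_features = num_features
        self.input_dim = input_dim
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, device):
        # Randomized resampling: draw fresh base frequencies at every call.
        eps = torch.randn(self.num_features, self.input_dim, device=device)
        return self.net(eps)  # learned frequencies omega, shape (D, d)

class OneStageRFF(nn.Module):
    """Learned RFF feature map followed by a fully connected linear classifier."""
    def __init__(self, input_dim, num_features, num_classes):
        super().__init__()
        self.generator = RFFGenerator(input_dim, num_features)
        self.classifier = nn.Linear(2 * num_features, num_classes)

    def forward(self, x):
        omega = self.generator(x.device)          # (D, d)
        proj = x @ omega.t()                      # (batch, D)
        scale = 1.0 / math.sqrt(omega.shape[0])
        phi = scale * torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)
        return self.classifier(phi)

# Joint (one-stage) training by empirical risk minimization, e.g.:
#   model = OneStageRFF(input_dim=d, num_features=512, num_classes=10)
#   loss = nn.CrossEntropyLoss()(model(x_batch), y_batch)
#   loss.backward(); optimizer.step()
```

Under these assumptions, the kernel is never formed explicitly; it is induced by the learned frequency distribution, and gradients flow through both the generator and the classifier in a single training loop.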