Recent works have derived neural networks with online correlation-based learning rules to perform \textit{kernel similarity matching}. These works applied existing linear similarity matching algorithms to nonlinear features generated with random Fourier methods. In this paper, we attempt to perform kernel similarity matching by directly learning the nonlinear features. Our algorithm proceeds by deriving and then minimizing an upper bound on the sum of squared errors between output and input kernel similarities. The construction of this upper bound leads to online correlation-based learning rules that can be implemented with a single-layer recurrent neural network. In addition to generating high-dimensional, linearly separable representations, we show that our upper bound naturally yields representations that are sparse and selective for specific input patterns. We compare the approximation quality of our method to that of the neural random Fourier method and of variants of the popular but non-biological Nystr{\"o}m method for approximating the kernel matrix. Our method appears comparable to or better than randomly sampled Nystr{\"o}m methods when the outputs are relatively low dimensional (although potentially still higher dimensional than the inputs), but less faithful when the outputs are very high dimensional.
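For concreteness, a minimal sketch of the kernel similarity matching objective referenced above, written in our own notation (the symbols $x_t$, $y_t$, $T$, and $k$ are illustrative and not tied to any particular section of the paper):
\[
\min_{\{y_t\}_{t=1}^{T}} \; \sum_{t=1}^{T} \sum_{t'=1}^{T} \Bigl( k(x_t, x_{t'}) - y_t^{\top} y_{t'} \Bigr)^{2},
\]
where $x_t$ are inputs, $y_t$ are the learned output representations, and $k(\cdot,\cdot)$ is the kernel whose similarities the outputs are trained to match; the algorithm summarized above minimizes an upper bound on this sum rather than the sum itself.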