Recent work has shown that sparse representations -- where only a small percentage of units are active -- can significantly reduce interference. Those works, however, relied on relatively complex regularization or meta-learning approaches that have only been used offline, in a pre-training phase. In this work, we pursue a direction that achieves sparsity by design rather than by learning. Specifically, we design an activation function that produces sparse representations deterministically by construction, and so is more amenable to online training. The idea builds on the simple approach of binning, but overcomes its two key limitations: gradients that are zero almost everywhere, because the bins are flat, and lost precision -- reduced discrimination -- due to coarse aggregation. We introduce the Fuzzy Tiling Activation (FTA), which provides non-negligible gradients and produces overlap between bins that improves discrimination. We first show that FTA is robust under covariate shift in a synthetic online supervised learning problem, where we can vary the level of correlation and drift. We then move to the deep reinforcement learning setting and investigate both value-based and policy gradient algorithms that use neural networks with FTA, in classic discrete control and MuJoCo continuous control environments. We show that on most domains, algorithms equipped with FTA learn a stable policy faster, without needing target networks.
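The mechanism described above is concrete enough to sketch. The following is a minimal PyTorch reading of the tiling-plus-fuzzy-indicator construction, assuming the standard formulation of FTA: each scalar input is compared against a vector of bin edges, distance to a bin is clipped by a fuzzy indicator, and the result is a mostly-zero vector with linear ramps at bin boundaries. The function name `fta` and the default values of `lower`, `upper`, `delta`, and `eta` are illustrative assumptions for this sketch, not the authors' released implementation.

```python
import torch

def fta(z, lower=-20.0, upper=20.0, delta=2.0, eta=2.0):
    """Sketch of a Fuzzy Tiling Activation over the range [lower, upper]."""
    # Tiling vector c: left edges of the k = (upper - lower) / delta bins.
    c = torch.arange(lower, upper, delta, device=z.device, dtype=z.dtype)
    z = z.unsqueeze(-1)  # (..., 1) so each scalar is compared to every bin
    # Distance from z to the interval [c, c + delta]; zero inside the bin.
    d = torch.clamp(c - z, min=0.0) + torch.clamp(z - delta - c, min=0.0)
    # Fuzzy indicator: identity for d <= eta, saturating at 1 beyond.
    # The ramp region (0 < d <= eta) supplies the non-negligible gradients
    # and makes adjacent bins overlap, unlike hard binning.
    i_eta = torch.where(d <= eta, d, torch.ones_like(d))
    out = 1.0 - i_eta  # 1 inside a bin, linear ramp nearby, 0 far away
    # Concatenate the per-scalar bin vectors along the feature dimension.
    return out.flatten(start_dim=-2)
```

With these assumed defaults, each scalar input expands into 20 bins, so a layer of width n feeds 20n sparse features to the next linear layer; `eta` controls how far the linear ramp, and hence the gradient, extends beyond each bin, with `eta = 0` recovering hard binning.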