Autoencoders are data-specific compression algorithms learned automatically from examples. The predominant approach has been to construct a single large global model that covers the domain. However, training and evaluating models of increasing size incurs additional time and computational cost. Conditional computation, sparsity, and model pruning techniques can reduce these costs while maintaining performance. Learning classifier systems (LCS) are a framework for adaptively subdividing input spaces into an ensemble of simpler local approximations that together cover the domain. LCS perform conditional computation through a population of individual gating/guarding components, each associated with a local approximation. This article explores the use of an LCS to adaptively decompose the input domain into a collection of small autoencoders, where local solutions of differing complexity may emerge. In addition to benefits in convergence time and computational cost, it is shown that code size and the resulting decoder computational cost can also be reduced when compared with the equivalent global model.
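To make the idea concrete, the following is a minimal sketch (not the paper's actual system) of an LCS-style ensemble: each member pairs a guard condition over the input space, here a simple hyperrectangle, with a tiny tied-weight linear autoencoder, and only the matching member's network is ever executed. The partition, learning rate, and code size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalAutoencoder:
    """One ensemble member: a guard (hyperrectangle condition)
    plus a tiny tied-weight linear autoencoder."""

    def __init__(self, low, high, n_in, n_code):
        self.low, self.high = low, high              # guard bounds over the input space
        self.W = rng.normal(0, 0.1, (n_code, n_in))  # shared encoder/decoder weights

    def matches(self, x):
        # The gating/guarding component: does this member cover input x?
        return bool(np.all((x >= self.low) & (x <= self.high)))

    def encode(self, x):
        return self.W @ x

    def decode(self, z):
        return self.W.T @ z

    def train_step(self, x, lr=0.01):
        z = self.encode(x)
        err = self.decode(z) - x
        # Gradient of 0.5 * ||W^T W x - x||^2 with respect to the tied W
        grad = np.outer(z, err) + np.outer(self.W @ err, x)
        self.W -= lr * grad
        return float(np.mean(err ** 2))

# Illustrative decomposition: split [0,1]^4 in half along the first
# dimension, giving each half its own small autoencoder (code size 2).
members = [
    LocalAutoencoder(np.array([0.0, 0, 0, 0]), np.array([0.5, 1, 1, 1]), n_in=4, n_code=2),
    LocalAutoencoder(np.array([0.5, 0, 0, 0]), np.array([1.0, 1, 1, 1]), n_in=4, n_code=2),
]

def reconstruct(x):
    # Conditional computation: only the matching member's network runs.
    for m in members:
        if m.matches(x):
            return m.decode(m.encode(x))
    raise ValueError("no matching member covers this input")

# Train each member only on the inputs its guard matches.
for _ in range(3000):
    x = rng.random(4)
    for m in members:
        if m.matches(x):
            m.train_step(x)

test_xs = rng.random((200, 4))
mse = np.mean([np.mean((reconstruct(x) - x) ** 2) for x in test_xs])
print(f"mean reconstruction MSE: {mse:.4f}")
```

Because each local model is small and only one runs per input, the per-input decoder cost is that of a single member rather than a global network; in a full LCS the guards themselves would also be adapted rather than fixed as above.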