Many-to-one maps are ubiquitous in machine learning, from the image recognition model that assigns a multitude of distinct images to the concept of "cat" to the time series forecasting model that assigns a range of distinct time series to a single scalar regression value. While the primary use of such models is naturally to associate the correct output with each input, in many problems it is also useful to be able to explore, understand, and sample from a model's fibers: for a fixed $y$ in the output space, the fiber over $y$ is the set of input values $x$ such that $f(x) = y$. In this paper we show that popular generative architectures are ill-suited to such tasks. Motivated by this, we introduce a novel generative architecture, a Bundle Network, based on the concept of a fiber bundle from (differential) topology. BundleNets exploit the idea of a local trivialization, wherein a space can be locally decomposed into a product space that cleanly encodes the many-to-one nature of the map. By enforcing this decomposition in BundleNets and by utilizing state-of-the-art invertible components, we make the investigation of a network's fibers natural.
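For reference, the fiber and the local triviality condition alluded to above can be written as follows; the symbols $F$ (a generic fiber) and $U \subseteq Y$ (a neighborhood of $y$) are illustrative notation not fixed by the abstract itself:
\[
f^{-1}(y) = \{\, x \in X \mid f(x) = y \,\},
\qquad
\varphi \colon f^{-1}(U) \xrightarrow{\ \cong\ } U \times F
\quad \text{with} \quad
\mathrm{pr}_U \circ \varphi = f \big|_{f^{-1}(U)} .
\]
The second condition is the product-space decomposition that a local trivialization provides: over $U$, every input decomposes into its output value together with a coordinate along the fiber.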