Deep learning has had tremendous success at learning low-dimensional representations of high-dimensional data. This success would be impossible if there were no hidden low-dimensional structure in data of interest; this existence is posited by the manifold hypothesis, which states that the data lies on an unknown manifold of low intrinsic dimension. In this paper, we argue that this hypothesis does not properly capture the low-dimensional structure typically present in image data. Assuming that data lies on a single manifold implies that intrinsic dimension is identical across the entire data space, and does not allow for subregions of this space to have a different number of factors of variation. To address this deficiency, we consider the union of manifolds hypothesis, which states that data lies on a disjoint union of manifolds of varying intrinsic dimensions. We empirically verify this hypothesis on commonly-used image datasets, finding that indeed, observed data lies on a disconnected set and that intrinsic dimension is not constant. We also provide insights into the implications of the union of manifolds hypothesis in deep learning, both supervised and unsupervised, showing that designing models with an inductive bias for this structure improves performance across classification and generative modelling tasks. Our code is available at https://github.com/layer6ai-labs/UoMH.
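The claim that intrinsic dimension varies across a disconnected dataset can be illustrated with a minimal sketch. The snippet below uses the Levina–Bickel nearest-neighbour MLE estimator of intrinsic dimension (a standard estimator; the abstract does not specify which estimator the paper itself uses, so this is an illustrative choice) on two synthetic, disjoint components embedded in the same ambient space: a 1-D curve and a 3-D patch. The estimator recovers a different intrinsic dimension on each component, which is exactly the situation the union of manifolds hypothesis describes.

```python
import numpy as np

def mle_intrinsic_dimension(X, k=10):
    """Levina-Bickel MLE estimate of intrinsic dimension.

    X: (n, d) array of points assumed to lie near a manifold.
    k: number of nearest neighbours used per point.
    """
    # Pairwise Euclidean distances (fine for small n).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dists = np.sqrt(d2)
    # Sort each row; column 0 is the point itself (distance 0), so skip it.
    knn = np.sort(dists, axis=1)[:, 1:k + 1]          # (n, k) neighbour distances
    # Per-point estimate: inverse mean log-ratio of the k-th to j-th distances.
    log_ratio = np.log(knn[:, -1:] / knn[:, :-1])     # (n, k-1)
    m = (k - 1) / log_ratio.sum(axis=1)
    return m.mean()

rng = np.random.default_rng(0)
# Two disjoint "components" embedded in R^5 with different intrinsic
# dimensions: a 1-D curve and a 3-D flat patch, offset so they never overlap.
t = rng.uniform(0, 1, size=(500, 1))
curve = np.hstack([t, np.sin(4 * t), np.cos(4 * t), 0 * t, 0 * t])
patch = np.hstack([rng.uniform(0, 1, size=(500, 3)),
                   np.zeros((500, 1)), np.full((500, 1), 10.0)])

d_curve = mle_intrinsic_dimension(curve)   # close to 1
d_patch = mle_intrinsic_dimension(patch)   # close to 3
```

A single global estimate over the pooled data would blur these two values together, which is the deficiency of the single-manifold assumption that the abstract points to.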