Deep learning has had tremendous success at learning low-dimensional representations of high-dimensional data. This success would be impossible if there were no hidden low-dimensional structure in data of interest; the manifold hypothesis posits such structure, stating that the data lies on an unknown manifold of low intrinsic dimension. In this paper, we argue that this hypothesis does not properly capture the low-dimensional structure typically present in image data. Assuming that data lies on a single manifold implies that intrinsic dimension is identical across the entire data space, and does not allow for subregions of this space to have a different number of factors of variation. To address this deficiency, we put forth the union of manifolds hypothesis, which states that data lies on a disjoint union of manifolds of varying intrinsic dimensions. We empirically verify this hypothesis on commonly-used image datasets, finding that, indeed, observed data lies on a disconnected set and that intrinsic dimension is not constant. We also provide insights into the implications of the union of manifolds hypothesis for deep learning, both supervised and unsupervised, showing that designing models with an inductive bias for this structure improves performance across classification and generative modelling tasks.
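The claim that intrinsic dimension varies across the data space can be probed with a standard estimator. The sketch below is illustrative only, not the paper's exact procedure: it applies the Levina–Bickel maximum-likelihood estimator of intrinsic dimension to two synthetic subsets embedded in the same 10-dimensional ambient space (a 2-D plane and a 1-D curve), recovering different dimensions for the two regions.

```python
import numpy as np

def mle_intrinsic_dim(X, k=10):
    """Levina-Bickel MLE of intrinsic dimension (illustrative sketch).

    For each point, the estimate is the inverse mean log-ratio of the
    k-th nearest-neighbour distance to the closer neighbour distances;
    the per-point estimates are then averaged.
    """
    # pairwise Euclidean distances between all rows of X
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # sort each row; column 0 is the point itself (distance 0), keep 1..k
    knn = np.sort(d, axis=1)[:, 1:k + 1]
    # m(x) = [ (1/(k-1)) * sum_{j<k} log(T_k / T_j) ]^{-1}
    logs = np.log(knn[:, -1:] / knn[:, :-1])
    m = (k - 1) / logs.sum(axis=1)
    return m.mean()

rng = np.random.default_rng(0)

# Region A: a 2-D plane embedded in 10-D ambient space
plane = np.zeros((500, 10))
plane[:, :2] = rng.uniform(size=(500, 2))

# Region B: a 1-D curve in the same 10-D ambient space
t = rng.uniform(0, 1, size=(500, 1))
curve = np.hstack([np.cos(2 * np.pi * t),
                   np.sin(2 * np.pi * t),
                   t,
                   np.zeros((500, 7))])

print(mle_intrinsic_dim(plane))  # close to 2
print(mle_intrinsic_dim(curve))  # close to 1
```

Running the estimator separately on each region mirrors the abstract's point: a single global dimension estimate would blur the two subsets together, while per-region estimates reveal the differing numbers of factors of variation.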