The concept of dimension is essential for grasping the complexity of data. A naive approach to determining the dimension of a dataset is to count its attributes. More sophisticated methods derive a notion of intrinsic dimension (ID) from more complex feature functions, e.g., distances between data points. Yet many of these approaches are based on empirical observations, cannot cope with the geometric character of contemporary datasets, and lack an axiomatic foundation. A different approach was proposed by V. Pestov, who links intrinsic dimension axiomatically to the mathematical phenomenon of concentration of measure. The first methods to compute this and related notions of ID were computationally intractable for large-scale real-world datasets. In the present work, we derive a computationally feasible method for determining said axiomatic ID functions. Moreover, we demonstrate how the geometric properties of complex data are accounted for in our modeling. In particular, we propose a principled way to incorporate neighborhood information, as in graph data, into the ID. This allows for new insights into common graph learning procedures, which we illustrate by experiments on the Open Graph Benchmark.