We reformulate the unsupervised dimension reduction (UDR) problem in the language of tempered distributions, i.e., as the problem of approximating an empirical probability density function by another tempered distribution supported on a $k$-dimensional subspace. We show that this task is connected with another classical problem of data science: the sufficient dimension reduction (SDR) problem. In fact, an algorithm for the first problem induces an algorithm for the second, and vice versa. To reduce the optimization problem over distributions to an optimization problem over ordinary functions, we introduce a nonnegative penalty function that ``forces'' the support of the model distribution to be $k$-dimensional. We then present an algorithm for minimizing the penalized objective, based on infinite-dimensional low-rank optimization, which we call the alternating scheme. We also design an efficient approximate algorithm for a special case of the problem, in which the distance between the empirical distribution and the model distribution is measured by the Maximum Mean Discrepancy (MMD) defined by a Mercer kernel of a certain type. We test our methods on four examples (three UDR and one SDR) using synthetic data and standard datasets.
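For the reader's convenience, we recall the standard definition of MMD (the particular class of Mercer kernels we use is specified later in the paper): for a kernel $k$ and distributions $P$ and $Q$, the squared Maximum Mean Discrepancy is
\[
\mathrm{MMD}^2(P, Q) = \mathbb{E}_{x, x' \sim P}\, k(x, x') + \mathbb{E}_{y, y' \sim Q}\, k(y, y') - 2\, \mathbb{E}_{x \sim P,\, y \sim Q}\, k(x, y),
\]
which, for a characteristic kernel, vanishes if and only if $P = Q$.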