We consider the problem of estimating a high-dimensional probability distribution from i.i.d. samples using model classes of functions in tree-based tensor formats, a particular case of tensor networks associated with a dimension partition tree. The distribution is assumed to admit a density with respect to a product measure, possibly discrete in order to handle discrete random variables. After discussing the representation of classical model classes in tree-based tensor formats, we present learning algorithms based on empirical risk minimization with an $L^2$ contrast. These algorithms exploit the multilinear parametrization of the formats to recast the nonlinear minimization problem as a sequence of empirical risk minimization problems with linear models. A suitable parametrization of the tensor in tree-based tensor format yields a linear model with orthogonal bases, so that each problem admits an explicit solution and explicit cross-validation risk estimates. These risk estimates enable model selection, for instance when exploiting sparsity in the coefficients of the representation. A strategy for adapting the tensor format (dimension tree and tree-based ranks) is also provided; it makes it possible to discover and exploit specific structures of high-dimensional probability distributions such as independence or conditional independence. We illustrate the performance of the proposed algorithms on the approximation of classical probabilistic models (Gaussian distributions, graphical models, Markov chains).
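For concreteness, the $L^2$ contrast mentioned above can be taken to be the standard contrast used in $L^2$ density estimation; the notation below ($\mu$ for the product reference measure, $f$ for a candidate density in the model class, $x_1,\dots,x_n$ for the samples) is generic and only sketches this standard choice, the precise definitions being those of the paper. The empirical risk then reads
\[
\mathcal{R}_n(f) \;=\; \|f\|_{L^2(\mu)}^2 \;-\; \frac{2}{n}\sum_{i=1}^{n} f(x_i),
\]
whose expectation equals $\|f-g\|_{L^2(\mu)}^2 - \|g\|_{L^2(\mu)}^2$ for a target density $g$, so that minimizing $\mathcal{R}_n$ over the model class approximately minimizes the $L^2$ distance to $g$.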