This paper revisits the problem of decomposing a positive semidefinite matrix into the sum of a matrix of given rank and a sparse matrix. An immediate application arises in portfolio optimization, where the matrix to be decomposed is the covariance matrix of the assets in the portfolio. Our approach represents the low-rank part of the solution as the product $MM^{T}$, where $M$ is a rectangular matrix of appropriate size whose entries are parametrized by the coefficients of a deep neural network. We then use a gradient descent algorithm to minimize an appropriate loss function over the parameters of the network. From the Lipschitz smoothness of this loss function we deduce the algorithm's rate of convergence to a local optimum, and we show that this rate grows polynomially in the input dimension, the output dimension, and the size of each hidden layer.
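To make the setup concrete, here is a minimal numpy sketch of the underlying decomposition problem, fitting $A \approx MM^{T} + S$ by alternating a gradient step on $M$ with a soft-thresholding update for the sparse part $S$. This is a simplification for illustration: it optimizes the entries of $M$ directly rather than parametrizing them by a deep neural network as the paper does, and the function name, step size, and thresholding scheme are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def lowrank_sparse_fit(A, r, lam=0.5, lr=0.005, steps=2000, seed=0):
    """Sketch: decompose a PSD matrix A as M @ M.T + S with S sparse.

    Minimizes ||A - M M^T - S||_F^2 by gradient descent on M, while S is
    updated by entrywise soft-thresholding (which promotes sparsity).
    NOTE: M is optimized directly here; the paper instead parametrizes M
    by the coefficients of a deep neural network.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    M = 0.1 * rng.standard_normal((n, r))  # rank-r factor
    S = np.zeros_like(A)                   # sparse part
    for _ in range(steps):
        R = A - M @ M.T - S
        # gradient of ||R||_F^2 w.r.t. M is -2 (R + R^T) M
        M += lr * 2.0 * (R + R.T) @ M
        # soft-threshold the residual to update the sparse part
        T = A - M @ M.T
        S = np.sign(T) * np.maximum(np.abs(T) - lam, 0.0)
    return M, S
```

In the portfolio setting, `A` would be the empirical covariance matrix of asset returns; `M @ M.T` captures a few common risk factors and `S` the idiosyncratic, asset-specific terms.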