Solving large tensor linear systems poses significant challenges due to the sheer volume of data involved, and the problem only becomes harder when some of the data is missing. Recently, Ma et al. showed that such systems can be tackled with a stochastic gradient descent-based method, assuming the entries are missing uniformly at random. We adapt their technique by modifying the update direction, showing that the method remains applicable under other missing-data models. We prove convergence results and verify them experimentally on synthetic data.
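To make the core idea concrete, the following is a minimal sketch of SGD for a linear system with uniformly missing entries, illustrated on an ordinary matrix system rather than the tensor setting of the paper. The inverse-probability reweighting and diagonal debiasing correction used here are illustrative assumptions about how a uniform missing pattern can be handled, not a reproduction of Ma et al.'s exact update.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 10
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true  # consistent system A x = b

# Uniform missing pattern: each entry of A is observed
# independently with probability p (missing entries are zeroed).
p = 0.8
mask = rng.random((m, n)) < p
A_obs = np.where(mask, A, 0.0)

x = np.zeros(n)
eta = 0.01  # constant step size
for t in range(20_000):
    i = rng.integers(m)
    a_hat = A_obs[i] / p  # unbiased estimate of the row A[i]
    # E[a_hat a_hat^T] = a a^T + ((1-p)/p) diag(a^2), so subtract an
    # estimated diagonal term to make the update direction unbiased.
    correction = ((1 - p) / p**2) * (A_obs[i] ** 2) * x
    grad = a_hat * (a_hat @ x - b[i]) - correction
    x -= eta * grad

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.3f}")
```

With a constant step size, the iterates converge only to a neighborhood of the true solution whose radius scales with the step size and the variance injected by the mask; a decreasing step schedule would drive the error to zero.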