Inverse problems are ubiquitous in science and engineering. In this paper, we consider the Statistical Inverse Problem (SIP) setting and demonstrate how Stochastic Gradient Descent (SGD) algorithms can be applied to linear SIPs. We provide consistency guarantees and finite-sample bounds for the excess risk. We also propose a modification of the SGD algorithm in which we leverage machine learning methods to smooth the stochastic gradients and improve empirical performance. We illustrate the algorithm in a setting of great current interest: the Functional Linear Regression model. In this setting we consider a synthetic data example and examples with a real data classification problem.
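To make the setting concrete, the following is a minimal sketch of plain SGD applied to a linear SIP instance, namely functional linear regression on a discretized grid with synthetic data. All choices here (the grid size, the Brownian-motion-style covariates, the polynomially decaying step size) are illustrative assumptions, not the paper's algorithm; in particular, the gradient-smoothing modification described above is not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize [0, 1] on a grid; functional covariates become vectors and
# the L2 inner product <X, beta> becomes a Riemann sum.
grid = np.linspace(0.0, 1.0, 100)
dx = grid[1] - grid[0]
beta_true = np.sin(2 * np.pi * grid)  # true slope function (assumed)

# Synthetic functional covariates: Brownian-motion-like random curves.
n = 2000
X = np.cumsum(rng.normal(size=(n, grid.size)), axis=1) / np.sqrt(grid.size)
y = X @ beta_true * dx + 0.1 * rng.normal(size=n)  # <X_i, beta> + noise

# Plain SGD on the squared loss: one sample per step, decaying step size.
beta_hat = np.zeros_like(beta_true)
for t in range(n):
    residual = np.dot(X[t], beta_hat) * dx - y[t]
    eta = 1.0 / (1.0 + 0.01 * t)  # polynomially decaying step size
    beta_hat -= eta * residual * X[t] * dx

# The estimate should move from the zero initialization toward beta_true.
error = np.linalg.norm(beta_hat - beta_true) / np.linalg.norm(beta_true)
```

The paper's smoothing modification would replace the raw stochastic gradient `residual * X[t] * dx` with a smoothed version before the update; the sketch above shows only the baseline iteration it modifies.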