Modal regression, a widely used regression protocol, has been extensively investigated in the statistical and machine learning communities due to its robustness to outliers and heavy-tailed noise. Understanding the theoretical behavior of modal regression is fundamental in learning theory. Despite significant progress in characterizing its statistical properties, most existing results rest on the assumption that samples are independent and identically distributed (i.i.d.), which is too restrictive for many real-world applications. This paper concerns the statistical properties of regularized modal regression (RMR) under an important dependence structure, namely Markov dependence. Specifically, we establish an upper bound for the RMR estimator under moderate conditions and derive an explicit learning rate. Our results show that Markov dependence affects the generalization error in that the effective sample size is discounted by a multiplicative factor depending on the spectral gap of the underlying Markov chain. This result sheds new light on the theoretical underpinnings of robust regression.
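As a minimal sketch of the setup described above (illustrative notation only; the smoothing kernel $\phi$, bandwidth $\sigma$, regularization parameter $\lambda$, and the symbol $\lambda_2$ for the second-largest eigenvalue of the chain are assumptions, not the paper's exact definitions), the RMR estimator typically maximizes a kernel-smoothed mode-seeking objective with a ridge-type penalty over a hypothesis space $\mathcal{H}_K$:
\[
  \hat{f}_{\mathbf{z}} \;=\; \arg\max_{f \in \mathcal{H}_K}\;
  \frac{1}{n\sigma}\sum_{i=1}^{n} \phi\!\Big(\frac{y_i - f(x_i)}{\sigma}\Big)
  \;-\; \lambda \|f\|_K^2 ,
\]
and the spectral-gap discount mentioned above can be read, roughly, as replacing the nominal sample size $n$ by an effective sample size
\[
  n_{\mathrm{eff}} \;\asymp\; (1-\lambda_2)\, n ,
\]
where $1-\lambda_2$ is the spectral gap of the underlying Markov chain; the precise constants and conditions are those stated in the paper's main results.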