In this paper, we derive upper bounds on the generalization error of deep neural networks trained on Markov datasets. These bounds are developed by extending Koltchinskii and Panchenko's approach for bounding the generalization error of combined classifiers with i.i.d. datasets. A key element of our extension is the development of new symmetrization inequalities in high-dimensional probability for Markov chains, in which the pseudo-spectral gap of the infinitesimal generator of the Markov chain appears as a key parameter. We also propose a simple method to convert these bounds, and other similar ones in traditional deep learning and machine learning, into Bayesian counterparts for both i.i.d. and Markov datasets. Extensions to $m$-th order homogeneous Markov chains, such as AR and ARMA models, and to mixtures of several Markov data sources are given; the spectral method from functional analysis is used to derive these results.
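For concreteness, the following is a sketch of a commonly used definition of the pseudo-spectral gap, following Paulin (2015), stated for a discrete-time chain with transition operator $P$ and stationary distribution $\pi$; the precise quantity used in the paper, defined through the infinitesimal generator, may differ in its exact form:
\[
  \gamma_{\mathrm{ps}} \;=\; \max_{k \ge 1} \frac{\gamma\!\left((P^{*})^{k} P^{k}\right)}{k},
\]
where $P^{*}$ denotes the adjoint of $P$ in $L^{2}(\pi)$ (the time reversal of the chain) and $\gamma(S)$ denotes the spectral gap of the self-adjoint operator $S$, i.e.\ one minus its second-largest eigenvalue. Intuitively, a larger pseudo-spectral gap corresponds to faster mixing of the chain, which is the mechanism through which it enters concentration and symmetrization inequalities of this kind.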