In this paper, we explore bounds on the expected risk when using deep neural networks for supervised classification from an information-theoretic perspective. First, we introduce model risk and fitting error, which arise from a further decomposition of the empirical risk. Model risk is the expected value of the loss under the model's predicted probabilities and depends exclusively on the model. Fitting error measures the discrepancy between the empirical risk and the model risk. We then derive an upper bound on the fitting error that links it to the back-propagated gradient and the model's parameter count. Furthermore, we show that the generalization error is bounded by the classification uncertainty, which is characterized by both the smoothness of the distribution and the sample size. Combining the bounds on the fitting error and the generalization error via the triangle inequality, we establish an upper bound on the expected risk. This bound is applied to provide theoretical explanations for overparameterization, non-convex optimization, and flat minima in deep learning. Finally, empirical verification confirms a significant positive correlation between the derived theoretical bounds and the expected risk observed in practice, affirming the practical relevance of the theoretical findings.
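To make the structure of the argument concrete, the following is a minimal sketch of the decomposition and triangle-inequality step described above; the symbols $R$, $\hat{R}$, $R_{\mathrm{m}}$, and $\theta$ are illustrative placeholders rather than the paper's own notation:

\[
\underbrace{R(\theta)}_{\text{expected risk}}
\;\le\;
\underbrace{R_{\mathrm{m}}(\theta)}_{\text{model risk}}
\;+\;
\underbrace{\bigl|\hat{R}(\theta) - R_{\mathrm{m}}(\theta)\bigr|}_{\text{fitting error (bounded via gradients and parameter count)}}
\;+\;
\underbrace{\bigl|R(\theta) - \hat{R}(\theta)\bigr|}_{\text{generalization error (bounded via classification uncertainty)}}
\]

Here the empirical risk $\hat{R}(\theta)$ decomposes into the model risk plus the fitting error, and once each gap term is bounded, the triangle inequality yields the stated upper bound on the expected risk.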