As one of the central tasks in machine learning, regression finds numerous applications across different fields. A common practice for solving regression problems is the mean squared error (MSE) minimization approach, or its regularized variants, which require prior knowledge about the model. Recently, Yi et al. proposed a mutual information based supervised learning framework in which they introduced a label entropy regularization that does not require any prior knowledge. When applied to classification tasks and solved via a stochastic gradient descent (SGD) optimization algorithm, their approach achieved significant improvements over the commonly used cross entropy loss and its variants. However, they did not provide a theoretical convergence analysis of the SGD algorithm for the proposed formulation. Moreover, applying the framework to regression tasks is nontrivial due to the potentially infinite support set of the label. In this paper, we investigate regression under the mutual information based supervised learning framework. We first argue that the MSE minimization approach is equivalent to a conditional entropy learning problem, and then propose a mutual information learning formulation for solving regression problems via a reparameterization technique. For the proposed formulation, we provide a convergence analysis of the SGD algorithm used to solve it in practice. Finally, we consider a multi-output regression data model for which we derive a lower bound on the generalization performance in terms of the mutual information associated with the underlying data distribution. The result shows that high dimensionality can be a blessing rather than a curse, governed by a threshold. We hope our work will serve as a good starting point for further research on mutual information based regression.
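The MSE minimization baseline referred to above is typically solved with SGD in practice. As a minimal illustration (not from the paper, with a hypothetical 1-D linear data model, learning rate, and noise level), per-sample SGD on the MSE objective looks like:

```python
# Minimal sketch: stochastic gradient descent minimizing the mean squared
# error for a 1-D linear regression model. Data model and hyperparameters
# are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + 1 plus small Gaussian noise (hypothetical).
X = rng.uniform(-1, 1, size=200)
y = 2.0 * X + 1.0 + 0.05 * rng.standard_normal(200)

w, b = 0.0, 0.0   # model parameters
lr = 0.1          # learning rate

for epoch in range(50):
    for i in rng.permutation(len(X)):
        pred = w * X[i] + b
        grad = pred - y[i]        # gradient of 0.5*(pred - y)^2 w.r.t. pred
        w -= lr * grad * X[i]     # chain rule: d pred / d w = X[i]
        b -= lr * grad            # chain rule: d pred / d b = 1

print(w, b)  # should land close to the true parameters (2, 1)
```

The mutual information based formulation studied in the paper replaces this plain MSE objective with an information-theoretic one; this sketch only shows the conventional baseline being contrasted against.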