The use of orthogonal projections on high-dimensional input and target data in learning frameworks is studied. First, we investigate the relation between two standard objectives in dimension reduction: maximizing retained variance and preserving pairwise relative distances. The derivation of their asymptotic correlation and numerical experiments show that a single projection usually cannot satisfy both objectives. In a standard classification problem, we determine projections of the input data that balance the two objectives and compare the resulting classifiers. Next, we extend the application of orthogonal projections to deep learning frameworks. We introduce new variational loss functions that enable the integration of additional information via transformations and projections of the target data. In two supervised learning problems, clinical image segmentation and music information classification, the proposed loss functions increase accuracy.
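The tension between the two dimension-reduction objectives can be illustrated numerically. The following is a minimal NumPy sketch, not the paper's experimental setup: the data distribution, dimensions, and metrics (variance ratio for the first objective, correlation of pairwise distances for the second) are illustrative assumptions. It compares a PCA projection, which maximizes retained variance, against a random orthogonal projection of the same rank.

```python
import numpy as np

rng = np.random.default_rng(0)

# Anisotropic high-dimensional data: variance concentrated in a few directions.
n, d, k = 200, 50, 5
scales = np.logspace(0, -2, d)          # geometrically decaying coordinate scales
X = rng.normal(size=(n, d)) * scales
X -= X.mean(axis=0)                     # center the data

def pairwise_dists(Z):
    """Matrix of Euclidean distances between all rows of Z."""
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

D = pairwise_dists(X)
mask = ~np.eye(n, dtype=bool)           # off-diagonal entries only

def evaluate(P):
    """For orthonormal columns P (d x k), return
    (fraction of variance retained, correlation of pairwise distances)."""
    Z = X @ P
    var_ratio = Z.var(axis=0).sum() / X.var(axis=0).sum()
    corr = np.corrcoef(pairwise_dists(Z)[mask], D[mask])[0, 1]
    return var_ratio, corr

# Objective 1 optimum: top-k right singular vectors (PCA) maximize retained variance.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T

# A random orthogonal projection: QR factorization of a Gaussian matrix.
P_rand, _ = np.linalg.qr(rng.normal(size=(d, k)))

for name, P in [("PCA", P_pca), ("random orthogonal", P_rand)]:
    v, c = evaluate(P)
    print(f"{name}: variance retained {v:.2f}, distance correlation {c:.2f}")
```

By construction the PCA projection retains at least as much variance as any other rank-k orthogonal projection, while the distance-correlation column shows how well each projection serves the second objective; a projection that scores highly on both at once is, as the abstract argues, generally unavailable.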