In this work, we propose a novel convolutional-autoencoder-based architecture that generates subspace-specific feature representations best suited to the classification task. The class-specific data are assumed to lie in low-dimensional linear subspaces that may be noisy and poorly separated, i.e., the subspace distance (minimum principal angle) between two classes is very small. The proposed network uses a novel class-specific self-expressiveness (CSSE) layer, sandwiched between the encoder and decoder networks, to generate class-wise subspace representations that are well separated. The CSSE layer and the encoder/decoder are trained jointly so that the data still lie in subspaces in the feature space, with a minimum principal angle much larger than that of the input space. To demonstrate the effectiveness of the proposed approach, several experiments have been carried out on standard machine learning benchmark datasets, and a significant improvement in classification performance is observed over existing subspace-based transformation learning methods.
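The two quantities the abstract relies on can be made concrete. A minimal sketch, assuming a ridge-regularized form of the self-expression objective (each feature vector reconstructed as a linear combination of the others in its class, ||Z - CZ||_F^2 + λ||C||_F^2) and the standard SVD-based definition of principal angles between subspaces; the paper's actual CSSE layer learns C inside the network, which this closed-form sketch does not capture:

```python
import numpy as np

def class_self_expression(Z, lam=1e-2):
    """Closed-form ridge self-expression for one class.

    Z: (n_samples, n_features) feature matrix of a single class.
    Returns the coefficient matrix C minimizing
    ||Z - C Z||_F^2 + lam * ||C||_F^2, i.e. C = G (G + lam I)^{-1}
    with Gram matrix G = Z Z^T. Small reconstruction error Z ~ C Z
    indicates the class features lie near a low-dimensional subspace.
    """
    G = Z @ Z.T
    return G @ np.linalg.inv(G + lam * np.eye(G.shape[0]))

def min_principal_angle(A, B):
    """Smallest principal angle (radians) between col-spaces of A and B.

    Orthonormalize each basis, then the cosines of the principal
    angles are the singular values of Qa^T Qb; the largest singular
    value gives the smallest angle (the separation the paper maximizes).
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s.max(), -1.0, 1.0))
```

For example, two one-dimensional subspaces spanned by orthogonal axes have a minimum principal angle of π/2 (well separated), while identical subspaces give an angle near 0; the method aims to push class subspaces toward the former regime in feature space.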