Unsupervised feature extractors are known to produce efficient and discriminative representations of data. Insight into the mappings they perform, and the human ability to understand them, nevertheless remain very limited. This is especially prominent when multilayer deep learning architectures are used. This paper demonstrates how to remove these bottlenecks within the architecture of the Nonnegativity Constrained Sparse Autoencoder (NCSAE). It is shown that by using both L1 and L2 regularization that induce nonnegativity of weights, most of the weights in the network become constrained to be nonnegative, thereby resulting in a more understandable structure with only minor deterioration in classification accuracy. In addition, the proposed approach extracts sparser features and produces additional sparsification of the output layer. The method is analyzed for accuracy and feature interpretation on the MNIST data, the NORB normalized uniform object data, and the Reuters text categorization dataset.
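To illustrate the central idea of nonnegativity-inducing regularization, the following is a minimal sketch, not the authors' code: a one-hidden-layer autoencoder whose weight decay is replaced by a composite L1 + L2 penalty applied only to the negative part of each weight matrix, which pushes the weights toward nonnegative values. The variable names, hyperparameter values, and the exact penalty form are illustrative assumptions, and the KL-divergence sparsity term of the full sparse autoencoder is omitted for brevity.

```python
# Minimal sketch (assumed form, not the NCSAE reference implementation):
# an autoencoder trained with squared-error reconstruction loss plus an
# L1 + L2 penalty on NEGATIVE weights only, driving weights toward >= 0.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nonneg_penalty(W, alpha1, alpha2):
    """L1 + L2 penalty on the negative part of W; zero for nonnegative entries."""
    neg = np.minimum(W, 0.0)
    return alpha1 * np.abs(neg).sum() + 0.5 * alpha2 * (neg ** 2).sum()

def nonneg_penalty_grad(W, alpha1, alpha2):
    """(Sub)gradient of the penalty: nonzero only where W < 0."""
    neg_mask = (W < 0.0).astype(W.dtype)
    return neg_mask * (-alpha1 + alpha2 * W)

# Toy data and a plain gradient-descent loop showing how the penalty enters.
X = rng.random((100, 20))                      # 100 samples, 20 features in [0, 1]
n_hidden, lr, a1, a2 = 10, 0.1, 3e-3, 3e-3     # assumed hyperparameters

W1 = rng.normal(0, 0.1, (20, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 20)); b2 = np.zeros(20)

for _ in range(200):
    H = sigmoid(X @ W1 + b1)                   # encoder
    Xhat = sigmoid(H @ W2 + b2)                # decoder
    err = Xhat - X

    # Backprop of the squared-error term, with the penalty gradient added
    # directly to each weight gradient (sigmoid derivatives throughout).
    dXhat = err * Xhat * (1 - Xhat) / X.shape[0]
    dW2 = H.T @ dXhat + nonneg_penalty_grad(W2, a1, a2)
    db2 = dXhat.sum(axis=0)
    dH = (dXhat @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH + nonneg_penalty_grad(W1, a1, a2)
    db1 = dH.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

loss = 0.5 * np.mean(np.sum((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - X) ** 2, axis=1)) \
     + nonneg_penalty(W1, a1, a2) + nonneg_penalty(W2, a1, a2)
print("final objective:", loss)
print("fraction of nonnegative weights:", ((W1 >= 0).mean() + (W2 >= 0).mean()) / 2)
```

Because the penalty vanishes on nonnegative entries, it does not perturb already-interpretable weights; only negative entries are pulled toward zero, which is what yields the mostly nonnegative, more understandable structure described in the abstract.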