Image classification with deep neural networks has reached state-of-the-art accuracy. This success is attributed to good internal representations that bypass the difficulties of non-convex optimization, yet we have little understanding of these representations, let alone the means to quantify them. Recent research has focused on alternative theories and explanations of the generalizability of deep networks. We propose an alternative: perturbing deep models during training induces changes that lead to transitions into different families of models. The result is an Anna Karenina Principle (AKP) for deep learning, in which less generalizable models (unhappy families) vary more in their representations than more generalizable models (happy families), paralleling Leo Tolstoy's dictum that "all happy families look alike; each unhappy family is unhappy in its own way." The Anna Karenina Principle has been found in a wide range of systems, from the surfaces of endangered corals exposed to harsh weather to the lungs of patients suffering from AIDS. In our work, we generate artificial perturbations by hot-swapping the activation and loss functions during training, and as a case study we build a model that classifies cancer cells from non-cancerous ones. We give a theoretical proof that the internal representations of generalizable (happy) models are similar in the asymptotic limit, and our experiments verify that generalizable models learn similar representations.
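The perturbation mechanism described above can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the paper's actual architecture or dataset: a tiny two-layer network is trained on a synthetic binary task, and its hidden activation function is hot-swapped from tanh to ReLU halfway through training.

```python
# Toy sketch of "hot-swapping" an activation function mid-training.
# Hypothetical setup for illustration only; the paper's models, data,
# and swap schedule are not specified here.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)  # XOR-like toy labels

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=8);      b2 = 0.0

def tanh(z):   return np.tanh(z)
def relu(z):   return np.maximum(z, 0.0)
def d_tanh(z): return 1.0 - np.tanh(z) ** 2
def d_relu(z): return (z > 0).astype(float)

act, d_act = tanh, d_tanh
lr = 0.1
for epoch in range(200):
    if epoch == 100:              # the perturbation: swap the activation
        act, d_act = relu, d_relu
    z1 = X @ W1 + b1
    h = act(z1)                   # hidden representation under current activation
    logits = h @ W2 + b2
    p = 1.0 / (1.0 + np.exp(-logits))   # sigmoid output
    # Gradient of mean binary cross-entropy w.r.t. logits is (p - y) / n
    g = (p - y) / len(y)
    gW2 = h.T @ g; gb2 = g.sum()
    gh = np.outer(g, W2) * d_act(z1)
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

acc = float(((p > 0.5) == y).mean())
print(f"final training accuracy: {acc:.2f}")
```

The hidden representations `h` before and after the swap could then be compared across runs to ask whether generalizable models converge to similar representations; the comparison metric itself is beyond this sketch.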