Classification problems solved with deep neural networks (DNNs) typically rely on a closed-world paradigm and optimize a single objective (e.g., minimizing the cross-entropy loss). This setup discards supporting signals that could reinforce the presence or absence of a particular pattern. The growing need for models that are interpretable by design makes the inclusion of such contextual signals essential. To this end, we introduce the notion of Self-Supervised Autogenous Learning (SSAL) models. An SSAL objective is realized through one or more additional targets that are derived from the original supervised classification task, following architectural principles found in multi-task learning. SSAL branches impose low-level priors (e.g., grouping) on the optimization process. Because SSAL branches can also be used during inference, models converge faster and focus on a richer set of class-relevant features. We show that SSAL models consistently outperform the state of the art while also providing structured predictions that are more interpretable.
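To make the idea concrete, the following is a minimal sketch, assuming a PyTorch setting and a hand-specified mapping from fine classes to coarse groups (the kind of grouping prior the abstract mentions). The names `SSALNet`, `ssal_loss`, the backbone layout, and the weight `alpha` are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSALNet(nn.Module):
    """Backbone with a main classifier and one SSAL branch.

    The SSAL branch predicts a coarse grouping of the original
    classes: a target derived from the supervised task itself,
    so no extra annotation is required (hypothetical layout).
    """
    def __init__(self, num_classes: int, num_groups: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.main_head = nn.Linear(64, num_classes)  # original task
        self.ssal_head = nn.Linear(64, num_groups)   # derived coarse task

    def forward(self, x):
        h = self.backbone(x)
        return self.main_head(h), self.ssal_head(h)

def ssal_loss(logits_main, logits_ssal, y, group_of, alpha=0.3):
    """Multi-task loss: main cross-entropy plus the SSAL branch.

    `group_of` is a LongTensor mapping each fine class index to its
    coarse group, so the auxiliary target is autogenous (derived
    from y). `alpha` is an assumed branch weight.
    """
    y_group = group_of[y]
    return (F.cross_entropy(logits_main, y)
            + alpha * F.cross_entropy(logits_ssal, y_group))
```

Under this sketch, the branch output is also available at inference time, e.g., by broadcasting each group score to its member classes and adding it to the main logits, which is one plausible way the abstract's claim about using SSAL branches during inference could be realized.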