Since Neural Networks came to dominate image processing, the computational complexity required to solve the targeted tasks has skyrocketed: to counter this unsustainable trend, many strategies have been developed that ambitiously aim to preserve performance. Promoting sparse topologies, for example, allows the deployment of deep neural network models on embedded, resource-constrained devices. Recently, Capsule Networks were introduced to enhance the explainability of a model, where each capsule is an explicit representation of an object or of its parts. These models show promising results on toy datasets, but their low scalability prevents deployment on more complex tasks. In this work, we explore sparsity alongside capsule representations to improve their computational efficiency by reducing the number of capsules. We show how pruning with Capsule Networks achieves high generalization with lower memory requirements, computational effort, and inference and training time.