Multi-Domain Learning (MDL) refers to the problem of learning a set of models derived from a common deep architecture, each one specialized to perform a task in a certain domain (e.g., photos, sketches, paintings). This paper tackles MDL with a particular interest in obtaining domain-specific models with an adjustable budget in terms of the number of network parameters and computational complexity. Our intuition is that, since in real applications the number of domains and tasks can be very large, an effective MDL approach should not only focus on accuracy but also on using as few parameters as possible. To implement this idea we derive specialized deep models for each domain by adapting a pre-trained architecture but, differently from other methods, we propose a novel strategy to automatically adjust the computational complexity of the network. To this end, we introduce Budget-Aware Adapters that select the most relevant feature channels to better handle data from a novel domain. We impose constraints on the number of active switches so that the resulting network respects the desired complexity budget. Experimentally, we show that our approach achieves recognition accuracy competitive with state-of-the-art approaches while producing much lighter networks, both in terms of storage and computation.
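The core mechanism described above, per-channel binary switches constrained so that only a budgeted fraction of channels stays active, can be illustrated with a minimal pure-Python sketch. All names here (`budget_aware_gate`, `switch_logits`) are hypothetical; the paper learns the switches end-to-end with a relaxation, whereas this sketch simply keeps the top-scoring channels allowed by the budget.

```python
def budget_aware_gate(features, switch_logits, budget):
    """Gate feature channels with binary switches under a complexity budget.

    features: list of per-channel feature maps (each a 2-D list of floats).
    switch_logits: per-channel relevance scores; higher means more relevant.
    budget: fraction of channels allowed to stay active (0 < budget <= 1).
    Returns the gated features and the binary switch mask.
    Hypothetical sketch, not the authors' implementation.
    """
    n = len(features)
    k = max(1, round(budget * n))  # number of channels the budget allows
    # Indices of the k highest-scoring channels stay on; the rest are zeroed.
    keep = set(sorted(range(n), key=lambda i: switch_logits[i])[-k:])
    mask = [1.0 if i in keep else 0.0 for i in range(n)]
    gated = [[[v * mask[i] for v in row] for row in ch]
             for i, ch in enumerate(features)]
    return gated, mask

# Toy usage: 8 channels with a 50% budget -> 4 channels survive.
feats = [[[float(i), float(i)], [float(i), float(i)]] for i in range(8)]
logits = [0.9, -1.2, 0.3, 2.1, -0.5, 1.7, 0.0, -2.0]
gated, mask = budget_aware_gate(feats, logits, budget=0.5)
```

Because switched-off channels contribute nothing downstream, a smaller budget directly reduces both the per-domain parameters that must be stored (one bit per switch) and the multiply-accumulates executed at inference time.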