Multi-task learning is an open and challenging problem in computer vision. The typical way of conducting multi-task learning with deep neural networks is either through handcrafted schemes that share all initial layers and branch out at an ad hoc point, or through separate task-specific networks with an additional feature sharing/fusion mechanism. Unlike existing methods, we propose an adaptive sharing approach, called AdaShare, that decides what to share across which tasks to achieve the best recognition accuracy, while taking resource efficiency into account. Specifically, our main idea is to learn the sharing pattern through a task-specific policy that selectively chooses which layers to execute for a given task in the multi-task network. We efficiently optimize the task-specific policy jointly with the network weights, using standard back-propagation. Experiments on several challenging and diverse benchmark datasets with a variable number of tasks demonstrate the efficacy of our approach over state-of-the-art methods. Project page: https://cs-people.bu.edu/sunxm/AdaShare/project.html.
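To make the select-or-skip idea concrete, below is a minimal NumPy sketch of a multi-task backbone gated by a per-task layer-execution policy. All names (`policy`, `forward`, the block count, and the fixed 0/1 policy matrix) are illustrative assumptions; in the actual method the policy is learned jointly with the weights via back-propagation, which this toy example does not show.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 shared residual blocks, 2 tasks, feature dim 8.
num_blocks, dim = 4, 8
weights = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(num_blocks)]

# Task-specific policy: policy[t, l] = 1 means task t executes block l;
# 0 means it skips the block via the identity shortcut. Here the policy
# is hand-fixed for illustration; AdaShare learns these decisions.
policy = np.array([[1, 1, 0, 1],
                   [1, 0, 1, 1]])

def forward(x, task):
    """Run the shared backbone, executing only the blocks the
    task-specific policy selects; skipped blocks act as identity."""
    for l in range(num_blocks):
        if policy[task, l]:
            x = x + np.tanh(weights[l] @ x)  # residual block
        # else: identity skip connection only
    return x

x = rng.standard_normal(dim)
out0 = forward(x, task=0)
out1 = forward(x, task=1)
# Both tasks share weights for the blocks they both execute (0 and 3),
# but their different skip patterns produce different features.
```

Blocks executed by several tasks share a single set of weights, so the policy matrix directly encodes the sharing pattern: identical rows mean full sharing, disjoint rows mean fully task-specific computation.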