In this work, we propose a novel and scalable solution to address the challenges of developing efficient dense prediction on edge platforms. Our first key insight is that Multi-Task Learning (MTL) and hardware-aware Neural Architecture Search (NAS) can work in synergy to greatly benefit on-device Dense Prediction (DP). Empirical results reveal that the joint learning of the two paradigms is surprisingly effective at improving DP accuracy, achieving superior performance over both the transfer learning of single-task NAS and prior state-of-the-art approaches in MTL, all with just 1/10th of the computation. To the best of our knowledge, our framework, named EDNAS, is the first to successfully leverage the synergistic relationship of NAS and MTL for DP. Our second key insight is that the standard depth training for multi-task DP can cause significant instability and noise in MTL evaluation. Instead, we propose JAReD, an improved, easy-to-adopt Joint Absolute-Relative Depth loss that reduces up to 88% of the undesired noise while simultaneously boosting accuracy. We conduct extensive evaluations on standard datasets, benchmark against strong baselines and state-of-the-art approaches, and provide an analysis of the discovered optimal architectures.
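Since this section names but does not spell out the JAReD formulation, the following is a minimal PyTorch sketch of what a joint absolute-relative depth loss can look like. The L1 absolute term, the target-normalized relative term, the `alpha` weighting, and the `eps` validity mask are all illustrative assumptions, not the paper's exact definition.

```python
import torch

def jared_loss(pred: torch.Tensor,
               target: torch.Tensor,
               alpha: float = 0.5,
               eps: float = 1e-6) -> torch.Tensor:
    """Sketch of a Joint Absolute-Relative Depth (JAReD) style loss.

    Combines an absolute depth error with a relative (scale-normalized)
    error; the exact terms and weighting used by EDNAS/JAReD may differ.
    """
    # Mask out invalid / zero-depth pixels so the relative term is well-defined.
    valid = target > eps
    p, t = pred[valid], target[valid]

    abs_term = (p - t).abs().mean()        # absolute depth error (L1)
    rel_term = ((p - t).abs() / t).mean()  # relative, scale-aware error

    return alpha * abs_term + (1.0 - alpha) * rel_term

# Example usage on random depth maps (batch, channel, height, width):
pred = torch.rand(2, 1, 64, 64) * 10.0
target = torch.rand(2, 1, 64, 64) * 10.0
loss = jared_loss(pred, target)
```

Blending the two terms this way is one plausible reading of "joint absolute-relative": the absolute term anchors the metric scale of the predictions, while the relative term keeps errors on near and far regions comparably weighted, which is consistent with the noise-reduction motivation stated above.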