When deploying deep neural network (DNN) applications on edge devices, continuously updating the model is important. Although updating the model with real incoming data is ideal, using all of that data is not always feasible due to constraints such as labeling and communication costs. It is therefore necessary to filter and select the data used for training (i.e., active learning) on the device. In this paper, we formalize a practical active learning problem for DNNs on edge devices and propose a general, task-agnostic framework that tackles this problem by reducing it to stream submodular maximization. The framework is lightweight enough to run with low computational resources, yet the submodular property provides a theoretical guarantee on the quality of its solutions. Within this framework, data selection criteria can be configured flexibly, including methods proposed in previous active learning studies. We evaluate our approach on both classification and object detection tasks in a practical setting that simulates a real-life scenario. The results show that the proposed framework outperforms all other methods on both tasks while running at a practical speed on real devices.
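To illustrate the reduction to stream submodular maximization, the following is a minimal sketch of a threshold-based streaming selection rule: an incoming sample is kept only if its marginal gain under a monotone submodular utility exceeds a threshold, until the labeling budget is exhausted. The utility function, threshold value, and data encoding here are illustrative assumptions, not the paper's actual selection criterion.

```python
def utility(selected):
    """Toy monotone submodular utility: coverage of feature buckets.
    (Set-coverage functions are submodular; this stands in for a real
    diversity or uncertainty-based criterion.)"""
    return len({x % 7 for x in selected})

def stream_select(stream, k, threshold):
    """Single-pass selection: keep an item if its marginal gain meets
    the threshold; stop once the budget k is filled."""
    selected = []
    for x in stream:
        if len(selected) >= k:
            break
        gain = utility(selected + [x]) - utility(selected)
        if gain >= threshold:
            selected.append(x)
    return selected

# Hypothetical usage: integers stand in for incoming samples.
chosen = stream_select(range(100), k=5, threshold=1)
```

Because each item is examined once and only the marginal gain is computed, memory and compute stay bounded regardless of stream length, which is what makes this style of algorithm suitable for low-resource edge devices.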