Deep learning models in large-scale machine learning systems are often continuously trained with enormous amounts of data from production environments. The sheer volume of streaming training data poses a significant challenge to real-time training subsystems, and ad-hoc sampling is the standard practice. Our key insight is that these deployed ML systems continuously perform forward passes on data instances during inference, yet ad-hoc sampling does not take advantage of this substantial computational effort. We therefore propose to record a constant amount of information per instance from these forward passes. The extra information measurably improves the selection of which data instances should participate in forward and backward passes. We propose a novel optimization framework to analyze this problem and provide an efficient approximation algorithm under mini-batch gradient descent as a practical solution. We also demonstrate the effectiveness of our framework and algorithm on several large-scale classification and regression tasks, compared against competitive baselines widely used in industry.
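The abstract does not specify which per-instance statistic is recorded or how selection is performed; the following is a minimal sketch, not the paper's algorithm, assuming the recorded scalar is the forward-pass loss and that mini-batch selection samples instances with probability proportional to it. The names `record` and `sample_minibatch` are illustrative.

```python
# Sketch of the idea: during inference, record one scalar per instance
# (here, the forward-pass loss), then bias mini-batch selection toward
# instances with larger recorded values.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-instance records: instance id -> last observed loss.
recorded_loss: dict[int, float] = {}

def record(instance_id: int, loss: float) -> None:
    """Store a constant amount of information (one scalar) per instance."""
    recorded_loss[instance_id] = loss

def sample_minibatch(batch_size: int) -> list[int]:
    """Pick training instances with probability proportional to recorded loss."""
    ids = np.array(list(recorded_loss.keys()))
    losses = np.array([recorded_loss[i] for i in ids], dtype=float)
    probs = losses / losses.sum()
    return list(rng.choice(ids, size=batch_size, replace=False, p=probs))

# Simulate inference over a stream of 1000 instances, then draw one mini-batch.
for i in range(1000):
    record(i, float(rng.exponential(scale=1.0)))
print(sample_minibatch(batch_size=32))
```

This toy version keeps the per-instance overhead constant (one float), mirroring the constraint stated above; the paper's actual selection criterion and approximation algorithm may differ.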