Machine Learning (ML) is more than just training models; the whole workflow must be considered. Once deployed, an ML model must be continuously monitored, supervised, and debugged to guarantee its validity and robustness in unexpected situations. Debugging in ML aims to identify (and address) model weaknesses in non-trivial contexts. Several techniques have been proposed to identify different types of model weaknesses, such as bias in classification, model decay, and adversarial attacks, yet there is no generic framework that allows them to work in a collaborative, modular, portable, and iterative way and, more importantly, that is flexible enough to accommodate both human- and machine-driven techniques. In this paper, we propose a novel containerized directed-graph framework to support and accelerate end-to-end ML workflow management, supervision, and debugging. The framework allows defining and deploying ML workflows in containers, tracking their metadata, checking their behavior in production, and improving the models by using both learned and human-provided knowledge. We demonstrate these capabilities by integrating into the framework two hybrid systems that detect data distribution drift: they identify samples that lie far from the latent space of the original distribution, request human intervention, and then either retrain the model or wrap it with a filter that removes the noise of corrupted data at inference time. We test these systems on the MNIST-C, CIFAR-10-C, and FashionMNIST-C datasets, obtaining promising accuracy results with the help of human involvement.
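The drift check sketched in the abstract, flagging samples that lie far from the latent space of the original distribution, can be illustrated with a minimal example. The snippet below assumes an encoder that maps raw inputs to latent vectors and uses Mahalanobis distance as the "farness" measure; the class name, the distance metric, and the quantile threshold are illustrative assumptions on our part, not the paper's exact method.

```python
import numpy as np

class LatentDriftDetector:
    """Flags inputs whose latent representation lies far from the
    training distribution (hypothetical sketch, not the paper's method)."""

    def __init__(self, encoder, threshold_quantile=0.99):
        self.encoder = encoder                    # maps inputs -> latent vectors
        self.threshold_quantile = threshold_quantile
        self.mean = None
        self.inv_cov = None
        self.threshold = None

    def fit(self, X_train):
        # Characterize the latent space of the original (clean) data.
        Z = self.encoder(X_train)
        self.mean = Z.mean(axis=0)
        cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
        self.inv_cov = np.linalg.inv(cov)
        # Any sample beyond this distance quantile is treated as drifted.
        self.threshold = np.quantile(self._mahalanobis(Z),
                                     self.threshold_quantile)

    def _mahalanobis(self, Z):
        # Distance of each latent vector from the training mean,
        # scaled by the training covariance.
        diff = Z - self.mean
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, self.inv_cov, diff))

    def flag_drifted(self, X_new):
        # Boolean mask of samples to escalate for human review.
        return self._mahalanobis(self.encoder(X_new)) > self.threshold
```

Flagged samples would then be routed to a human, who decides between the two remedies the abstract names: retraining the model on the drifted data, or wrapping it with an inference-time filter that removes the corruption noise.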