Machine learning models commonly exhibit unexpected failures post-deployment, due to either data shifts or situations that were uncommon in the training environment. Domain experts typically go through the tedious process of inspecting the failure cases manually, identifying failure modes, and then attempting to fix the model. In this work, we aim to standardise and bring principles to this process by answering two critical questions: (i) how do we know that we have identified meaningful and distinct failure types? (ii) how can we validate that a model has, indeed, been repaired? We suggest that the quality of the identified failure types can be validated by measuring the intra- and inter-type generalisation after fine-tuning, and we introduce metrics to compare different subtyping methods. Furthermore, we argue that a model can be considered repaired if it achieves high accuracy on the failure types while retaining performance on the previously correct data. We combine these two ideas into a principled framework for evaluating the quality of both the identified failure subtypes and model repairment. We evaluate its utility on a classification and an object detection task. Our code is available at https://github.com/Rokken-lab6/Failure-Analysis-and-Model-Repairment
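For concreteness, the following is a minimal sketch of the repairment criterion described above: a model counts as repaired if it reaches high accuracy on the identified failure cases while retaining accuracy on the data it previously handled correctly. The function name, thresholds, and inputs are illustrative assumptions, not taken from the paper's released code.

```python
# Illustrative sketch of the repairment criterion (assumptions, not the
# authors' implementation): check accuracy on previous failures and
# retention on previously correct predictions after fine-tuning.
import numpy as np

def repairment_check(y_true, y_pred_before, y_pred_after,
                     failure_acc_threshold=0.9,   # hypothetical threshold
                     retention_threshold=0.95):   # hypothetical threshold
    """Evaluate the two conditions: accuracy on former failure cases
    and retained accuracy on formerly correct cases."""
    y_true = np.asarray(y_true)
    before_correct = np.asarray(y_pred_before) == y_true
    after_correct = np.asarray(y_pred_after) == y_true

    # Accuracy on the cases the original model got wrong (the failures).
    failure_acc = after_correct[~before_correct].mean()
    # Retained accuracy on the cases the original model got right.
    retention = after_correct[before_correct].mean()

    repaired = (failure_acc >= failure_acc_threshold
                and retention >= retention_threshold)
    return failure_acc, retention, repaired
```

In practice, `y_pred_before` and `y_pred_after` would come from the model before and after fine-tuning on the identified failure subtypes; the same per-subset accuracies can be reused to measure intra- and inter-type generalisation by restricting the evaluation to individual failure types.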