When machine learning models encounter data outside the distribution on which they were trained, they tend to behave poorly, most prominently producing over-confident yet erroneous predictions. Such behaviours can have disastrous effects on real-world machine learning systems. In this field, graceful degradation refers to the optimisation of model performance as it encounters this out-of-distribution data. This work presents a definition and discussion of graceful degradation and where it can be applied in deployed visual systems. Following this, a survey of relevant areas is undertaken, introducing a novel split of the graceful degradation problem into active and passive approaches. In passive approaches, graceful degradation is handled and achieved by the model in a self-contained manner; in active approaches, the model is updated upon encountering epistemic uncertainties. This work communicates the importance of the problem and aims to prompt the development of machine learning strategies that are aware of graceful degradation.
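The over-confidence failure mode described above can be illustrated with a minimal sketch (not taken from this work; it assumes scikit-learn and synthetic 2-D data): a linear classifier trained on two well-separated Gaussian blobs assigns near-certain probability to a query point far outside its training distribution, even though neither class is a sensible label there.

```python
# Minimal illustration of softmax/sigmoid over-confidence on
# out-of-distribution (OOD) inputs. Assumes numpy and scikit-learn;
# the data and the query point are synthetic, chosen for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# In-distribution training data: two well-separated 2-D Gaussian blobs.
X_train = np.vstack([
    rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(500, 2)),
    rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(500, 2)),
])
y_train = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X_train, y_train)

# OOD query: far from both training blobs.
x_ood = np.array([[50.0, 50.0]])
probs = clf.predict_proba(x_ood)[0]

# A linear model's confidence grows with distance from the decision
# boundary, so this OOD point is classified with near-certainty
# despite lying nowhere near the training data.
print(f"OOD predicted class: {probs.argmax()}, confidence: {probs.max():.4f}")
```

The same effect appears in deep networks, where the softmax layer extrapolates confidently far from the training manifold; this is the epistemic-uncertainty setting that the active and passive approaches surveyed here aim to address.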