Additional training of a deep learning model can negatively affect its results, turning an initially positive sample into a negative one (degradation). Such degradation can occur in real-world use cases because of the diversity of sample characteristics: a data set mixes critical samples that must not be missed with less important ones, so accuracy alone cannot capture a model's performance. While existing research aims to prevent model degradation, insights into the related methods are needed to grasp their benefits and limitations. In this talk, we present implications derived from a comparison of methods for reducing degradation. In particular, we formulated use cases for industrial settings in terms of arrangements of a data set. The results imply that, because of a trade-off between accuracy and preventing degradation, practitioners should continuously choose a suitable method in light of data set availability and the life cycle of an AI system.
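The degradation described above is commonly quantified as the fraction of samples the earlier model handled correctly that the updated model now gets wrong (a "negative flip rate"). As a minimal sketch, assuming hypothetical label and prediction arrays for two model versions, it can be computed like this; note how overall accuracy can rise while individual samples still degrade:

```python
import numpy as np

def negative_flip_rate(y_true, pred_old, pred_new):
    """Fraction of samples the old model got right but the new model gets wrong."""
    y_true, pred_old, pred_new = map(np.asarray, (y_true, pred_old, pred_new))
    flips = (pred_old == y_true) & (pred_new != y_true)
    return flips.mean()

# Hypothetical example: accuracy improves from 3/5 to 4/5,
# yet sample 2 flips from correct to incorrect (degradation).
y_true   = [0, 1, 1, 0, 1]
pred_old = [0, 1, 1, 1, 0]   # correct on samples 0, 1, 2
pred_new = [0, 1, 0, 0, 1]   # correct on samples 0, 1, 3, 4
print(negative_flip_rate(y_true, pred_old, pred_new))  # → 0.2
```

This is why, for data sets containing critical samples, a metric like this must be tracked alongside accuracy when deciding whether to deploy a retrained model.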