Machine-learning systems such as self-driving cars or virtual assistants are composed of a large number of machine-learning models that recognize image content, transcribe speech, analyze natural language, infer preferences, rank options, etc. These systems can be represented as directed acyclic graphs in which each vertex is a model, and models feed each other information over the edges. Oftentimes, the models are developed and trained independently, which raises an obvious concern: Can improving a machine-learning model make the overall system worse? We answer this question affirmatively by showing that improving a model can deteriorate the performance of downstream models, even after those downstream models are retrained. Such self-defeating improvements are the result of entanglement between the models. We identify different types of entanglement and demonstrate via simple experiments how they can produce self-defeating improvements. We also show that self-defeating improvements emerge in a realistic stereo-based object detection system.