Prior work has successfully incorporated optimization layers as the last layer in neural networks for various problems, thereby allowing joint learning and planning in a single neural network forward pass. In this work, we identify a weakness in such a set-up: certain inputs to the optimization layer lead to undefined output of the neural network. Such undefined decision outputs can lead to catastrophic outcomes in critical real-time applications. We show that an adversary can cause such failures by forcing rank deficiency on the matrix fed to the optimization layer, which results in the optimization failing to produce a solution. We provide a defense against these failure cases by controlling the condition number of the input matrix. We study the problem on synthetic data, Jigsaw Sudoku, and speed planning for autonomous driving, building on top of prior frameworks in end-to-end learning and optimization. We show that our proposed defense effectively prevents the framework from failing with undefined output. Finally, we surface a number of edge cases that lead to serious bugs in popular equation and optimization solvers, which can likewise be abused.