Cyber-physical systems (CPSs) are widespread in critical domains, and significant damage can be caused if an attacker is able to modify the code of their programmable logic controllers (PLCs). Unfortunately, traditional techniques for attesting code integrity (i.e. verifying that it has not been modified) rely on firmware access or roots of trust, neither of which proprietary or legacy PLCs are likely to provide. In this paper, we propose a practical code integrity checking solution based on privacy-preserving black-box models that instead attest the input/output behaviour of PLC programs. Using faithful offline copies of the PLC programs, we identify their most important inputs through an information flow analysis, execute them on many input combinations to collect data, then train neural networks able to predict PLC outputs (i.e. actuator commands) from their inputs. By exploiting the black-box nature of the model, our solution maintains the privacy of the original PLC code and does not assume that attackers are unaware of its presence. The trust instead comes from the fact that it is extremely hard to attack the PLC code and neural networks at the same time and with consistent outcomes. We evaluated our approach on a modern six-stage water treatment plant testbed, finding that it could predict actuator states from PLC inputs with near-100% accuracy, and thus could detect all 120 effective code mutations that we subjected the PLCs to. Finally, we found that it is not practically possible to simultaneously modify the PLC code and apply discreet adversarial noise to our attesters in a way that leads to consistent (mis-)predictions.
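The pipeline described above can be illustrated with a minimal, hypothetical sketch (not the paper's actual implementation): a stand-in PLC program is executed offline on many input combinations, a small neural network is trained on the resulting input/output pairs, and attestation then consists of checking whether the live PLC's actuator command agrees with the network's prediction. All names (`plc_logic`, `attest`) and the toy two-input water-tank logic are illustrative assumptions.

```python
# Hypothetical sketch of behavioural attestation; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def plc_logic(level, inflow):
    """Stand-in offline PLC copy: open the drain valve when the tank is high.
    `inflow` never affects the output, so an information flow analysis
    would identify `level` as the important input."""
    return int(level > 0.6)

# 1. Execute the offline copy on many input combinations to collect data.
X = rng.random((2000, 2))                      # columns: [level, inflow]
y = np.array([plc_logic(l, f) for l, f in X])  # actuator command (0/1)

# 2. Train a one-hidden-layer network with full-batch gradient descent.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                   # hidden activations
    z2 = (h @ W2 + b2).ravel()
    p = 1.0 / (1.0 + np.exp(-z2))              # predicted P(command = 1)
    g2 = (p - y) / len(y)                      # dLoss/dz2 for cross-entropy
    gz1 = (g2[:, None] @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * (h.T @ g2[:, None]); b2 -= lr * g2.sum(keepdims=True)
    W1 -= lr * (X.T @ gz1); b1 -= lr * gz1.sum(axis=0)

def predict(inputs):
    """Black-box attester: predict actuator commands from PLC inputs."""
    h = np.tanh(inputs @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2).ravel())) > 0.5).astype(int)

def attest(inputs, observed_cmd):
    """True if the live PLC's output matches the model's prediction."""
    return int(predict(inputs)[0]) == observed_cmd

# 3. Attestation: genuine code agrees with the model; mutated code is flagged.
sample = np.array([[0.9, 0.1]])                # level well above the threshold
genuine_ok = attest(sample, plc_logic(*sample[0]))
mutation_flagged = not attest(sample, 1 - plc_logic(*sample[0]))
```

In deployment, disagreements would be accumulated over many attestation rounds before raising an alarm, since a single mismatch near a decision boundary may reflect model noise rather than a code modification.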