Robot apps are becoming more automated, complex, and diverse. An app usually consists of many functions that interact with each other and with the environment, allowing robots to perform various tasks. However, these interactions also open a new door for cyber attacks: adversaries can exploit them to threaten the safety of robot operations. Unfortunately, this issue has rarely been explored in prior work. We present the first systematic investigation of function interactions in common robot apps. First, we disclose the potential risks and damages caused by malicious interactions. We introduce a comprehensive graph to model the function interactions in robot apps, built by analyzing 3,100 packages from the Robot Operating System (ROS) platform. From this graph, we identify and categorize three types of interaction risks. Second, we propose RTron, a novel system to detect and mitigate these risks and protect the operations of robot apps. We introduce security policies for each type of risk and design coordination nodes to enforce the policies and regulate the interactions. We conduct extensive experiments on 110 robot apps from the ROS platform and two complex apps (Baidu Apollo and Autoware) widely adopted in industry. Evaluation results indicate that RTron can correctly identify and mitigate all potential risks with negligible performance cost. To validate the practicality of the risks and solutions, we implement and evaluate RTron on a physical UGV (TurtleBot) with real-world apps and environments.