Hardware peripherals such as GPUs and FPGAs are commonly available in server-grade computing to accelerate specific compute tasks, from database queries to machine learning. CSPs have integrated these accelerators into their infrastructure and let tenants combine and configure these components flexibly, based on their needs. Securing I/O interfaces is critical to ensure proper isolation between tenants in these highly complex, heterogeneous, yet shared server systems, especially in the cloud, where some peripherals may be under the control of a malicious tenant. In this work, we investigate the interfaces that connect peripheral hardware components to each other and to the rest of the system. We show that the I/O memory management units (IOMMUs), intended to ensure proper isolation of peripherals, are the source of a new attack surface: the I/O translation look-aside buffer (IOTLB). We show that, using an FPGA accelerator card, one can gain precise information about IOTLB activity. That information can be used for covert communication between peripherals without involving the CPU, or to directly extract leakage from neighboring accelerated compute jobs such as GPU-accelerated databases. We present the first qualitative and quantitative analysis of this newly uncovered attack surface before fine-grained channels become widely viable with the introduction of CXL and PCIe 5.0. In addition, we propose possible countermeasures that software developers, hardware designers, and system administrators can use to suppress the observed side-channel leakage, and we analyze their implicit costs.