While early empirical evidence supports the case that learned index structures offer favourable average-case performance, little is known about their worst-case behaviour. By contrast, classical index structures are known to achieve optimal worst-case bounds. This work evaluates the robustness of learned index structures in the presence of adversarial workloads. To simulate adversarial workloads, we carry out a data poisoning attack on linear regression models that manipulates the cumulative distribution function (CDF) on which the learned index model is trained. The attack degrades the fit of the underlying ML model by injecting a set of poisoning keys into the training dataset, which increases the model's prediction error and thus deteriorates the overall performance of the learned index structure. We assess the performance of various regression methods as well as the learned index implementations ALEX and PGM-Index. We show that learned index structures can suffer a significant performance deterioration of up to 20% when evaluated on poisoned versus non-poisoned datasets.
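To make the attack setting concrete, the following is a minimal sketch of CDF poisoning against a single-segment linear model, assuming a greedy strategy that inserts whichever candidate key most increases the refit model's error; the function names (`greedy_poison`, `cdf_positions`) and the greedy candidate search are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def cdf_positions(keys):
    """Sort the keys and pair them with their ranks, i.e. the empirical CDF
    that a (single-segment) learned index fits."""
    keys = np.sort(np.asarray(keys, dtype=float))
    return keys, np.arange(len(keys), dtype=float)


def fit_linear(keys, positions):
    """Least-squares fit of position ~ slope * key + intercept."""
    slope, intercept = np.polyfit(keys, positions, deg=1)
    return slope, intercept


def model_error(keys, positions, slope, intercept):
    """Mean squared prediction error of the linear CDF model."""
    return float(np.mean((slope * keys + intercept - positions) ** 2))


def greedy_poison(keys, candidates, n_poison):
    """Greedily insert the candidate poisoning key that maximizes the error of
    the model refit on the poisoned key set (illustrative stand-in only)."""
    poisoned = np.asarray(keys, dtype=float)
    remaining = list(candidates)
    for _ in range(min(n_poison, len(remaining))):
        best_key, best_err = None, -np.inf
        for c in remaining:
            k, p = cdf_positions(np.append(poisoned, c))
            err = model_error(k, p, *fit_linear(k, p))
            if err > best_err:
                best_key, best_err = c, err
        poisoned = np.append(poisoned, best_key)
        remaining.remove(best_key)
    return np.sort(poisoned)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    legit = rng.uniform(0, 1000, size=500)       # legitimate keys
    candidates = rng.uniform(0, 1000, size=50)   # candidate poisoning keys

    k, p = cdf_positions(legit)
    print("clean MSE:   ", model_error(k, p, *fit_linear(k, p)))

    k, p = cdf_positions(greedy_poison(legit, candidates, n_poison=25))
    print("poisoned MSE:", model_error(k, p, *fit_linear(k, p)))
```

The increase in mean squared error printed by this sketch corresponds to the larger prediction (and hence search-range) error that degrades lookup performance in the learned index; real attacks on ALEX or the PGM-Index must additionally account for segmentation and error bounds, which this single-segment example omits.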