Distributed dataflow systems enable the use of clusters for scalable data analytics. However, selecting appropriate cluster resources for a processing job is often not straightforward. Performance models trained on historical executions of a concrete job are helpful in such situations, yet they are usually bound to a specific job execution context (e.g., node type, software versions, job parameters) due to the few input parameters they consider. Even in the case of slight context changes, such supporting models need to be retrained and cannot benefit from historical execution data of related contexts. This paper presents Bellamy, a novel modeling approach that combines scale-outs, dataset sizes, and runtimes with additional descriptive properties of a dataflow job, and is thereby able to capture the context of a job execution. Moreover, Bellamy realizes a two-step modeling approach. First, a general model is trained on all the available data for a specific scalable analytics algorithm, thereby incorporating data from different contexts. Subsequently, the general model is optimized for the situation at hand, based on the available data for the concrete context. We evaluate our approach on two publicly available datasets consisting of execution data from various dataflow jobs carried out in different environments, showing that Bellamy outperforms state-of-the-art methods.
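The two-step idea can be illustrated with a minimal sketch: pretrain a runtime predictor on execution data from many contexts, then fine-tune it on the few observations from the concrete context. The network architecture, feature layout, and training hyperparameters below are illustrative assumptions for this sketch, not the paper's actual Bellamy implementation.

```python
# Hypothetical sketch of the two-step modeling approach (assumed details,
# not the paper's architecture): pretrain generally, then fine-tune.
import torch
import torch.nn as nn

class RuntimeModel(nn.Module):
    def __init__(self, num_features: int):
        super().__init__()
        # Inputs: scale-out, dataset size, plus encoded descriptive
        # properties of the job execution context (assumed encoding).
        self.net = nn.Sequential(
            nn.Linear(num_features, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 1),  # predicted runtime
        )

    def forward(self, x):
        return self.net(x)

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

# Step 1: general model over all historical executions of the algorithm,
# pooled across contexts (random tensors stand in for real data).
model = RuntimeModel(num_features=8)
x_all, y_all = torch.randn(500, 8), torch.rand(500, 1)
train(model, x_all, y_all, epochs=200, lr=1e-3)

# Step 2: optimize for the concrete context using its few observations;
# a smaller learning rate helps retain the pretrained knowledge.
x_ctx, y_ctx = torch.randn(10, 8), torch.rand(10, 1)
train(model, x_ctx, y_ctx, epochs=50, lr=1e-4)
```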