This paper proposes an automated framework for efficient application profiling and training of Machine Learning (ML) performance models, composed of two parts: OSCAR-P and aMLLibrary. OSCAR-P is an auto-profiling tool designed to automatically test serverless application workflows running on multiple hardware and node combinations in cloud and edge environments. OSCAR-P collects relevant profiling information on the execution time of the individual application components. These data are later used by aMLLibrary to train ML-based performance models, making it possible to predict the performance of applications on unseen configurations. We test our framework on clusters with different architectures (x86 and arm64) and workloads, considering multi-component use-case applications. This extensive experimental campaign proves the efficiency of OSCAR-P and aMLLibrary, significantly reducing the time needed for application profiling, data collection, and data processing. The preliminary results on the accuracy of the ML performance models show a Mean Absolute Percentage Error lower than 30% in all the considered scenarios.
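As a minimal illustration of the overall idea described above (and not the actual OSCAR-P or aMLLibrary interface), the sketch below trains a regression model on hypothetical profiling records of a single application component and evaluates its Mean Absolute Percentage Error (MAPE) on configurations held out from training. All data, features, and parameter choices here are synthetic assumptions made purely for illustration.

```python
# Illustrative sketch (not the actual OSCAR-P/aMLLibrary API): train a
# performance model on synthetic profiling data and evaluate its
# Mean Absolute Percentage Error (MAPE) on unseen configurations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)

# Hypothetical profiling records: [number of nodes, cores per node, input size (MB)]
X = rng.integers(low=1, high=16, size=(200, 3)).astype(float)
# Synthetic execution time of one application component (seconds)
y = 120.0 * X[:, 2] / (X[:, 0] * X[:, 1]) + rng.normal(0.0, 2.0, size=200)

# Hold out a subset of configurations to mimic prediction on unseen setups
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
mape = mean_absolute_percentage_error(y_test, y_pred)  # fraction, e.g. 0.12 -> 12%
print(f"MAPE on unseen configurations: {mape * 100:.1f}%")
```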