This paper examines a prevailing trend in the industry: migrating data-intensive analytics applications from on-premises deployments to cloud-native environments. We find that the distinct cost models of cloud-based storage demand a more nuanced approach to performance optimization. Specifically, based on traces collected from Uber's production Presto fleet, we argue that common I/O optimizations, such as table scans with filters and broadcast joins, may incur unexpected costs when applied naively in the cloud. Traditional I/O optimizations focus mainly on improving throughput or latency in on-premises settings and do not account for the monetary cost of storage API calls. In cloud environments, these costs can be substantial: at Uber's scale, Presto workloads alone can generate billions of API calls per day. Presented as a case study, this paper serves as a starting point for further research into efficient I/O strategies tailored to data-intensive applications in cloud settings.
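To make the order of magnitude concrete, the following is a minimal back-of-envelope sketch, not taken from the paper, of what storage API requests alone could cost at this scale. Both the per-request price (modeled on public S3-style GET pricing) and the daily call volume are illustrative assumptions.

```python
# Back-of-envelope sketch (illustrative, not from the paper): rough daily cost of
# object-storage API calls for a large analytics fleet. The per-request price is an
# assumed list price; actual rates vary by provider, region, storage class, and
# negotiated discounts, and data-transfer charges are ignored here.

GET_PRICE_PER_1000 = 0.0004     # USD per 1,000 GET requests (assumption)
CALLS_PER_DAY = 2_000_000_000   # hypothetical "billions of API calls per day"

def daily_api_cost(calls_per_day: float, price_per_1000: float) -> float:
    """Return the daily API-request cost in USD for a given request volume."""
    return calls_per_day / 1000 * price_per_1000

if __name__ == "__main__":
    cost = daily_api_cost(CALLS_PER_DAY, GET_PRICE_PER_1000)
    # Roughly $800/day at these assumed numbers, i.e. hundreds of thousands of
    # dollars per year, before counting the extra LIST/HEAD requests that
    # fine-grained scans and filters can multiply.
    print(f"Estimated API-call cost: ${cost:,.0f} per day")
```

Even under these conservative assumptions, request charges are non-trivial, which is why optimizations that trade fewer bytes read for many more small requests can backfire in the cloud.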