Low latency services such as credit-card fraud detection and website targeted advertisement rely on Big Data platforms (e.g., Lucene, GraphChi, Cassandra) which run on top of memory-managed runtimes, such as the JVM. These platforms, however, suffer from unpredictable and unacceptably high pause times due to inadequate memory management decisions (e.g., allocating objects with very different lifetimes next to each other, resulting in memory fragmentation). This leads to long and frequent application pauses, breaking Service Level Agreements (SLAs). This problem has been identified before, and results show that current memory management techniques are ill-suited for applications that hold massive amounts of middle- to long-lived objects in memory (which is the case for a wide spectrum of Big Data applications). Previous works try to reduce such application pauses by allocating objects off-heap or in special allocation regions/generations, thus alleviating the pressure on memory management. However, all these solutions require a combination of programmer effort and knowledge, source code access, or off-line profiling, with a clear negative impact on programmer productivity and/or application performance. This paper presents ROLP, a runtime object lifetime profiling system. ROLP profiles application code at runtime to identify which allocation contexts create objects with middle to long lifetimes, given that such objects need to be handled differently from short-lived ones. This profiling information greatly improves memory management decisions, reducing long tail latencies by up to 51% for Lucene, 85% for GraphChi, and 60% for Cassandra, with negligible throughput and memory overhead. ROLP is implemented for the OpenJDK 8 HotSpot JVM and does not require any programmer effort or source code access.