Message brokers see widespread adoption in modern IT landscapes. These systems are comparatively easy to use and configure and thus present a flexible solution for various data storage scenarios. Their ability to scale horizontally enables users to adapt to growing data volumes and changing environments. However, one of the main challenges concerning message brokers is the risk of them becoming a bottleneck within an IT architecture. To prevent this, the amount of data a given message broker can handle with a specific configuration needs to be known. In this paper, we propose a monitoring architecture for message brokers and similar systems. Using this approach, we present a comprehensive performance analysis of Apache Kafka, a popular message broker implementation. As part of the benchmark, we study selected data producer settings and their impact on the achievable data ingestion rate.
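To illustrate the kind of producer settings such a benchmark varies, the following is a minimal sketch of a Kafka producer configured via the standard Java client. The abstract does not list the exact parameters studied, so the broker address, topic name, and concrete values (batch.size, linger.ms, acks, compression.type) are illustrative assumptions, not the paper's actual benchmark configuration.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerTuningExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address and topic; replace with the cluster under test.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Producer settings that typically influence the achievable ingestion rate
        // (example values only):
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);        // bytes buffered per partition before a batch is sent
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);            // wait up to 10 ms to accumulate larger batches
        props.put(ProducerConfig.ACKS_CONFIG, "1");                // leader-only acknowledgement trades durability for speed
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");  // compress batches to reduce network load

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100_000; i++) {
                producer.send(new ProducerRecord<>("benchmark-topic", Integer.toString(i), "payload-" + i));
            }
        }
    }
}
```

In a benchmark of this kind, each such parameter would be varied in isolation while the resulting ingestion rate at the broker is measured by the monitoring architecture.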