Message brokers see widespread adoption in modern IT landscapes, with Apache Kafka being one of the most widely used platforms. These systems offer well-defined APIs for usage and configuration and provide flexible solutions for various data storage scenarios. Their ability to scale horizontally enables users to adapt to growing data volumes and changing environments. However, one of the main challenges concerning message brokers is the risk that they become a bottleneck within an IT architecture. To prevent this, it must be known how much data a message broker with a given configuration can handle. In this paper, we propose a monitoring architecture for message brokers and similar Java Virtual Machine-based systems. Using our approach, we present a comprehensive performance analysis of the popular Apache Kafka platform. As part of the benchmark, we study selected data ingestion scenarios with respect to their maximum data ingestion rates. The results show that, with the data sender tool we developed, an ingestion rate of about 420,000 messages/second can be achieved on commodity hardware.
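To make the benchmarked ingestion scenario concrete, the following is a minimal sketch of a Kafka load generator in Java. It is not the authors' actual data sender tool; the topic name, broker address, message count, and tuning values (`batch.size`, `linger.ms`) are illustrative assumptions, and it uses only the standard `KafkaProducer` client API.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LoadGenerator {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed broker address; adjust to the benchmark environment.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Larger batches plus a small linger time raise throughput
        // at the cost of per-message latency (illustrative values).
        props.put("batch.size", 65536);
        props.put("linger.ms", 5);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (long i = 0; i < 1_000_000L; i++) {
                // Fire-and-forget sends; the producer batches them internally.
                producer.send(new ProducerRecord<>("benchmark-topic",
                        Long.toString(i), "payload-" + i));
            }
            // Ensure all buffered records are sent before exiting.
            producer.flush();
        }
    }
}
```

In such a setup, the achievable ingestion rate depends heavily on batching and serialization settings, which is why a maximum-rate benchmark must fix a specific producer configuration.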