Recent developments and research in distributed ledger technologies and blockchain have contributed to the increasing adoption of distributed systems. To collect relevant insights into system behavior, many evaluation frameworks have emerged, focusing mainly on the throughput of the system under test. However, these frameworks often lack comprehensiveness and generality, particularly in adopting a cross-layer approach to distributed applications. This work analyses in detail the requirements for distributed systems assessment. We summarize these findings into a structured methodology and experimentation framework called TURBO. Our approach emphasizes setting up and assessing a broader spectrum of distributed systems and addresses a notable research gap. We showcase the effectiveness of the framework by evaluating four distinct systems and their interactions, leveraging a diverse set of eight carefully selected metrics and 12 essential parameters. Through experimentation and analysis, we demonstrate the framework's capability to provide valuable insights across various use cases. For instance, we identify that combining Trusted Execution Environments with the FROST threshold signature scheme introduces minimal performance overhead, with an average latency of around \SI{40}{\ms}. We also show that emulating realistic system behavior, e.g., Maximal Extractable Value, is possible and could be used to further model such dynamics. The TURBO framework enables a deeper understanding of distributed systems and is a powerful tool for researchers and practitioners navigating the complex landscape of modern computing infrastructures.