Blockprint, a tool for assessing client diversity on the Ethereum beacon chain, is essential for analyzing the network's decentralization. This paper details experiments conducted at MigaLabs to improve Blockprint's accuracy, evaluating several configurations of the K-Nearest Neighbors (KNN) classifier and exploring a Multi-Layer Perceptron (MLP) classifier as a proposed alternative. Findings suggest that the MLP classifier generally achieves higher accuracy while requiring a smaller training dataset. The study also revealed that clients running in different modes, particularly those subscribed to all subnets, affect attestation inclusion in distinct ways, which degrades model accuracy; methods to mitigate this decline are proposed. Consequently, the recommendation is to employ an MLP model trained on a combined dataset of slots from clients in both the default and subscribed-to-all-subnets configurations.
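To make the comparison concrete, the following is a minimal sketch, not Blockprint's actual pipeline, of how a KNN and an MLP classifier could be evaluated side by side on per-slot feature vectors. The use of scikit-learn, the placeholder feature and label arrays, and all hyperparameters shown here are assumptions for illustration only.

```python
# Sketch only: compares a KNN and an MLP classifier on hypothetical per-slot
# features (e.g. attestation-packing statistics) labeled with the proposing client.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Placeholder data standing in for features extracted from beacon blocks.
rng = np.random.default_rng(0)
X = rng.random((2000, 16))          # hypothetical feature vectors, one per slot
y = rng.integers(0, 5, size=2000)   # hypothetical client labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

models = {
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "mlp": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    ),
}

# Fit both models and report held-out accuracy for a rough comparison.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```

In a setting like the one described in the paper, the training set would combine slots produced under both the default and subscribed-to-all-subnets client configurations before fitting the MLP.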