In the supervised learning domain, given the recent prevalence of algorithms with high computational cost, attention is shifting towards simpler, lighter, and less computationally intensive training and inference approaches. In particular, randomized algorithms are currently experiencing a resurgence, owing to their simple and general formulation. Using randomized neural networks, we study distributed classification, which can be employed in situations where data cannot be stored at a central location nor shared. We propose a more efficient solution for distributed classification that applies a lossy compression approach when sharing the local classifiers with other agents. This approach originates from the framework of hyperdimensional computing and is adapted herein. Experiments on a collection of datasets demonstrate that the proposed approach usually achieves higher accuracy than the local classifiers and comes close to the benchmark, the centralized classifier. This work can be considered a first step towards analyzing the varied landscape of distributed randomized neural networks.
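To give an intuition for the kind of hyperdimensional lossy compression referred to above, the following is a minimal sketch, not the exact construction used in this work: each row of a local readout matrix W is bound (element-wise multiplied) with a random bipolar key hypervector shared by all agents, the results are bundled (summed) into a single vector, and an approximate reconstruction is obtained by unbinding with the same keys. All names and dimensions below (W, keys, d, n_classes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_classes = 1000, 10                 # hidden dimension and number of classes (illustrative)
W = rng.standard_normal((n_classes, d)) # a local classifier's readout matrix to be shared

# Random bipolar key hypervectors, one per class, assumed known to all agents.
keys = rng.choice([-1.0, 1.0], size=(n_classes, d))

# Lossy compression: bind each row of W to its key (element-wise product)
# and bundle (sum) all bound rows into a single d-dimensional vector.
s = np.sum(keys * W, axis=0)

# Approximate decompression: unbinding with the same keys recovers each row
# plus crosstalk noise contributed by the other bundled rows.
W_hat = keys * s                        # shape (n_classes, d)

err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.2f}")
```

In this sketch, an agent transmits only the d-dimensional vector s instead of the full n_classes x d matrix, which is the sense in which the compression is lossy: the reconstruction error grows with the number of bundled rows.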