Recent years have witnessed a surge in deep learning research, marked by the introduction of expansive generative models such as OpenAI's Sora and GPT, Meta AI's LLaMA series and BART, and Google's FLAN and Gemini models. However, the rapid advancement of large models (LMs) has intensified the demand for computing resources, particularly GPUs, which are prized for their parallel processing capabilities. This demand is exacerbated by limited GPU availability, driven by supply chain delays and monopolistic acquisition by major technology firms. Distributed Machine Learning (DML) methods, such as Federated Learning (FL), mitigate these challenges by partitioning data and models across multiple servers, though implementing optimizations such as tensor and pipeline parallelism remains complex. Blockchain technology is a promising complement, ensuring data integrity, scalability, and trust in distributed computing environments, yet practical guidance on building blockchain-based DML systems remains scarce. In this paper, we propose a \textit{trustworthy distributed machine learning} (TDML) framework that leverages blockchain to coordinate remote trainers and validate their workloads, achieving privacy, transparency, and efficient model training across public remote computing resources. Experimental validation demonstrates TDML's efficacy in overcoming performance limitations and detecting malicious nodes, positioning it as a robust solution for scalable and secure distributed machine learning.
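The coordination-and-validation idea at the heart of the framework, trainers logging their workloads to an append-only chain and validators cross-checking the claimed results, can be illustrated with a minimal sketch. The snippet below is a toy under stated assumptions, not the paper's implementation: the names `TrainingLedger`, `Block`, and `validate_update` are hypothetical, the "blockchain" is an in-memory hash-linked list, and validation simply recomputes and compares an update hash.

```python
# Toy sketch (assumptions): TrainingLedger/Block/validate_update are
# illustrative names, not APIs from the TDML paper. The chain is an
# in-memory hash-linked list standing in for a real blockchain.
import hashlib
import json
from dataclasses import dataclass
from typing import List


def sha256(payload: dict) -> str:
    """Deterministic hash of a JSON-serialisable payload."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


@dataclass
class Block:
    index: int
    prev_hash: str
    record: dict  # e.g. {"trainer": ..., "round": ..., "update_hash": ...}
    hash: str = ""

    def seal(self) -> "Block":
        # Bind this block to its predecessor via the hash chain.
        self.hash = sha256({"index": self.index, "prev": self.prev_hash,
                            "record": self.record})
        return self


class TrainingLedger:
    """Append-only chain of training-workload records."""

    def __init__(self):
        self.chain: List[Block] = [Block(0, "0" * 64, {"genesis": True}).seal()]

    def submit(self, record: dict) -> Block:
        blk = Block(len(self.chain), self.chain[-1].hash, record).seal()
        self.chain.append(blk)
        return blk

    def verify_chain(self) -> bool:
        # Any tampering with a past record breaks the hash links.
        for prev, cur in zip(self.chain, self.chain[1:]):
            expected = sha256({"index": cur.index, "prev": prev.hash,
                               "record": cur.record})
            if cur.prev_hash != prev.hash or cur.hash != expected:
                return False
        return True


def validate_update(claimed_hash: str, recomputed_update: list) -> bool:
    """A validator recomputes the trainer's update and compares hashes."""
    return claimed_hash == sha256({"update": recomputed_update})


# Usage: an honest trainer logs its update; a validator cross-checks it.
ledger = TrainingLedger()
update = [0.12, -0.03, 0.07]  # toy gradient update
ledger.submit({"trainer": "node-A", "round": 1,
               "update_hash": sha256({"update": update})})
print(ledger.verify_chain())  # True: chain intact
print(validate_update(ledger.chain[-1].record["update_hash"], update))  # True
print(validate_update(ledger.chain[-1].record["update_hash"], [9.9]))   # False: flag node
```

In this toy, a mismatch between the on-chain hash and a validator's recomputation is the signal for flagging a malicious node; the paper's actual validation and consensus mechanisms are more involved.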