While contrastive learning (CL) shows considerable promise in self-supervised representation learning, its deployment on resource-constrained devices remains largely underexplored. The substantial computational demands of training conventional CL frameworks pose significant challenges, particularly in terms of energy consumption, data availability, and memory usage. We evaluate four widely used CL frameworks: SimCLR, MoCo, SimSiam, and Barlow Twins. We focus on the practical feasibility of these frameworks for edge and fog deployment, and introduce a systematic benchmarking strategy that includes energy profiling and reduced-training-data conditions. Our findings reveal that SimCLR, contrary to its reputation for high computational cost, demonstrates the lowest energy consumption across various data regimes. Finally, we extend our analysis by evaluating lightweight neural architectures paired with CL frameworks. Our study provides insights into the resource implications of deploying CL in edge/fog environments with limited processing capabilities and opens several research directions for future optimization.