Academic Journal of Computing & Information Science, 2024, 7(5); doi: 10.25236/AJCIS.2024.070509.
Yifei Ding
School of Science and Technology, Hunan University of Science and Technology, Xiangtan, 411201, China
The federated self-supervised framework addresses the problem of large amounts of unlabeled data in traditional federated learning and achieves good results across the learning process. However, self-supervised learning increases communication overhead: limited communication capacity raises the time cost of training and also affects model convergence and accuracy. This paper proposes Adaptive Lazily Aggregated Quantization (ALAQ), a method that combines quantization with threshold-based adaptive aggregation to reduce the communication overhead of the federated self-supervised framework. Experimental results show that ALAQ effectively reduces both the number of communication bits and the number of communication rounds between clients and the server in the federated self-supervised framework, thereby reducing communication overhead.
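The two ingredients named in the abstract can be illustrated with a minimal sketch: each client uniformly quantizes the innovation of its gradient (the change since the last uploaded quantized value), and a lazy-aggregation rule skips the upload when the quantized update has changed too little to matter. The function names, the bit width, and the fixed threshold below are illustrative assumptions; the paper's actual ALAQ criterion adapts the threshold during training.

```python
import numpy as np

def quantize(grad, prev_q, bits=4):
    """Uniformly quantize the gradient innovation (grad - prev_q) to
    `bits` bits per coordinate, LAQ-style, and return the new quantized
    gradient together with the dynamic range used."""
    diff = grad - prev_q
    r = np.max(np.abs(diff))
    if r == 0:
        return prev_q.copy(), 0.0
    levels = 2 ** bits - 1          # number of quantization intervals
    step = 2 * r / levels           # width of one interval
    # round each coordinate of the innovation to the nearest level
    q = np.round((diff + r) / step) * step - r
    return prev_q + q, r

def should_upload(q_new, q_old, threshold):
    """Lazy-aggregation rule: upload only if the quantized update has
    changed by more than the (here fixed, in ALAQ adaptive) threshold."""
    return float(np.linalg.norm(q_new - q_old) ** 2) > threshold
```

With 4 bits per coordinate, the per-coordinate quantization error is bounded by half an interval width (`r / 15` here), and a client whose update fails the `should_upload` test sends nothing that round, saving both bits and communication rounds.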
Federated Learning, Communication Optimization, Federated Self-Supervision, Gradient Compression
Yifei Ding. Adaptive Lazily Aggregated Quantized Algorithm Based on Federated Self-Supervision. Academic Journal of Computing & Information Science (2024), Vol. 7, Issue 5: 72-78. https://doi.org/10.25236/AJCIS.2024.070509.
[1] McMahan B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data[C]//Artificial intelligence and statistics. PMLR, 2017: 1273-1282.
[2] Chen X, He K. Exploring simple siamese representation learning[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021: 15750-15758.
[3] Fang P F, Li X, Yan Y, et al. Connecting the Dots in Self-Supervised Learning: A Brief Survey for Beginners [J]. Journal of Computer Science and Technology, 2022, 37(3): 507-526.
[4] Msechu E J, Giannakis G B. Sensor-centric data reduction for estimation with WSNs via censoring and quantization[J]. IEEE Transactions on Signal Processing, 2011, 60(1): 400-414.
[5] Bernstein J, Wang Y X, Azizzadenesheli K, et al. signSGD: Compressed optimisation for non-convex problems[C]//International Conference on Machine Learning. PMLR, 2018: 560-569.
[6] Qu Z, Zhou Z, Cheng Y, et al. Adaptive loss-aware quantization for multi-bit networks [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 7988-7997.
[7] Sun J, Chen T, Giannakis G B, et al. Communication-efficient distributed learning via lazily aggregated quantized gradients[J]. arXiv preprint arXiv:1909.07588, 2019. DOI: 10.48550/arXiv.1909.07588.
[8] Chen T, Giannakis G B, Sun T, et al. LAG: Lazily aggregated gradient for communication-efficient distributed learning[J]. arXiv preprint arXiv:1805.09965, 2018. DOI: 10.48550/arXiv.1805.09965.
[9] Xu W, Fang W, Ding Y, et al. Accelerating federated learning for IoT in big data analytics with pruning, quantization and selective updating[J]. IEEE Access, 2021. DOI: 10.1109/ACCESS.2021.3063291.
[10] Kolesnikov A, Zhai X, Beyer L. Revisiting self-supervised visual representation learning[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 1920-1929.
[11] Beyer L, Zhai X, Oliver A, et al. S4L: Self-supervised semi-supervised learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019. DOI: 10.1109/ICCV.2019.00156.