
UVeQFed: Universal Vector Quantization for Federated Learning

Author(s): Shlezinger, Nir; Chen, Mingzhe; Eldar, Yonina C; Poor, H Vincent; Cui, Shuguang

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1m902337
Full metadata record
DC Field | Value | Language
dc.contributor.author | Shlezinger, Nir | -
dc.contributor.author | Chen, Mingzhe | -
dc.contributor.author | Eldar, Yonina C | -
dc.contributor.author | Poor, H Vincent | -
dc.contributor.author | Cui, Shuguang | -
dc.date.accessioned | 2024-02-04T02:10:38Z | -
dc.date.available | 2024-02-04T02:10:38Z | -
dc.date.issued | 2020-12-23 | en_US
dc.identifier.citation | Shlezinger, Nir, Chen, Mingzhe, Eldar, Yonina C, Poor, H Vincent, Cui, Shuguang. (2021). UVeQFed: Universal Vector Quantization for Federated Learning. IEEE Transactions on Signal Processing, 69, 500-514. doi:10.1109/tsp.2020.3046971 | en_US
dc.identifier.issn | 1053-587X | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1m902337 | -
dc.description.abstract | Traditional deep learning models are trained at a centralized server using data samples collected from users. Such data samples often include private information, which the users may not be willing to share. Federated learning (FL) is an emerging approach to train such learning models without requiring the users to share their data. FL consists of an iterative procedure in which, in each iteration, the users train a copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model. A major challenge that arises in this method is the need for each user to repeatedly transmit its learned model over the throughput-limited uplink channel. In this work, we tackle this challenge using tools from quantization theory. In particular, we identify the unique characteristics associated with conveying trained models over rate-constrained channels, and propose a suitable quantization scheme for such settings, referred to as universal vector quantization for FL (UVeQFed). We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only minimal distortion. We then theoretically analyze the distortion, showing that it vanishes as the number of users grows. We also characterize how models trained with conventional federated averaging combined with UVeQFed converge to the model that minimizes the loss function. Our numerical results demonstrate the gains of UVeQFed over previously proposed methods in terms of both the distortion induced by quantization and the accuracy of the resulting aggregated model. | en_US
dc.format.extent | 500-514 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | IEEE Transactions on Signal Processing | en_US
dc.rights | Author's manuscript | en_US
dc.title | UVeQFed: Universal Vector Quantization for Federated Learning | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | doi:10.1109/tsp.2020.3046971 | -
dc.identifier.eissn | 1941-0476 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
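
The abstract describes an iterative loop: each user trains locally, compresses its model update for the rate-constrained uplink, and the server decodes and averages the updates into a global model. The sketch below is a minimal toy illustration of that structure only, assuming a scalar subtractive dithered quantizer in place of the paper's lattice-based universal vector quantizer; the function names and parameters (local_update, dithered_quantize, step) are hypothetical and not taken from the paper.

```python
# Toy sketch of one federated-averaging round with dithered quantization
# of the model updates. Not the paper's UVeQFed scheme: UVeQFed uses
# subtractive dithered *lattice* (vector) quantization with universal
# lossless coding; a scalar dithered quantizer stands in here.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_model, lr=0.1):
    # Placeholder for local training: a real client would run several SGD
    # steps on its private data and return the resulting model difference.
    gradient = rng.normal(size=global_model.shape)  # stand-in gradient
    return -lr * gradient

def dithered_quantize(update, step, dither):
    # Subtractive dithered quantization: add the dither, round to a uniform
    # grid of spacing `step`; the decoder later subtracts the same dither.
    return step * np.round((update + dither) / step)

def server_round(global_model, num_users=10, step=0.05):
    decoded = []
    for _ in range(num_users):
        update = local_update(global_model)
        # The dither is pseudo-random; in practice a shared seed lets the
        # server regenerate and subtract it exactly at the decoder.
        dither = rng.uniform(-step / 2, step / 2, size=update.shape)
        q = dithered_quantize(update, step, dither)
        decoded.append(q - dither)  # server-side subtractive decoding
    # Federated averaging: the per-user quantization errors are independent
    # and zero-mean, so their average shrinks as the number of users grows,
    # mirroring the vanishing-distortion behavior analyzed in the paper.
    return global_model + np.mean(decoded, axis=0)

model = np.zeros(8)
print(server_round(model))
```

Averaging over more users reduces the aggregate quantization error in this toy exactly because the residual errors are zero-mean and independent across users, which is the intuition behind the abstract's claim that the distortion vanishes as the number of users grows.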

Files in This Item:
File | Description | Size | Format
UVeQFedUniversalVectorQuantization.pdf | - | 958.53 kB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.