To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1n873033
Full metadata record
DC Field | Value | Language
dc.contributor.author | Nguyen, Hung T | -
dc.contributor.author | Sehwag, Vikash | -
dc.contributor.author | Hosseinalipour, Seyyedali | -
dc.contributor.author | Brinton, Christopher G | -
dc.contributor.author | Chiang, Mung | -
dc.contributor.author | Vincent Poor, H | -
dc.date.accessioned | 2024-02-04T01:48:14Z | -
dc.date.available | 2024-02-04T01:48:14Z | -
dc.date.issued | 2020-11-09 | en_US
dc.identifier.citation | Nguyen, Hung T, Sehwag, Vikash, Hosseinalipour, Seyyedali, Brinton, Christopher G, Chiang, Mung, Vincent Poor, H. (2021). Fast-Convergent Federated Learning. IEEE Journal on Selected Areas in Communications, 39 (1), 201 - 218. doi:10.1109/jsac.2020.3036952 | en_US
dc.identifier.issn | 0733-8716 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr1n873033 | -
dc.description.abstract | Federated learning has emerged recently as a promising solution for distributing machine learning tasks through modern networks of mobile devices. Recent studies have obtained lower bounds on the expected decrease in model loss that is achieved through each round of federated learning. However, convergence generally requires a large number of communication rounds, which induces delay in model training and is costly in terms of network resources. In this paper, we propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed. We first theoretically characterize a lower bound on the improvement that can be obtained in each round if devices are selected according to the expected improvement their local models will provide to the current global model. Then, we show that FOLB obtains this bound through uniform sampling by weighting device updates according to their gradient information. FOLB is able to handle both communication and computation heterogeneity of devices by adapting the aggregations according to estimates of each device's capability to contribute to the updates. We evaluate FOLB in comparison with existing federated learning algorithms and experimentally show its improvement in trained model accuracy, convergence speed, and/or model stability across various machine learning tasks and datasets. | en_US
dc.format.extent | 201 - 218 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | IEEE Journal on Selected Areas in Communications | en_US
dc.rights | Author's manuscript | en_US
dc.title | Fast-Convergent Federated Learning | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | doi:10.1109/jsac.2020.3036952 | -
dc.identifier.eissn | 1558-0008 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
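The abstract describes FOLB's core mechanism: sample devices uniformly at random each round, then weight each sampled update by its gradient information during aggregation. The sketch below illustrates that idea only in broad strokes; it is not the authors' implementation. The specific weighting (inner product of a local gradient with the mean sampled gradient), the quadratic per-device loss, and all function names are illustrative assumptions based on the abstract alone.

```python
import numpy as np

rng = np.random.default_rng(0)


def local_gradient(w, data):
    # Gradient of an illustrative per-device least-squares loss
    # mean((X @ w - y)^2); a stand-in for each device's local objective.
    X, y = data
    return 2 * X.T @ (X @ w - y) / len(y)


def folb_style_round(w, devices, m, lr=0.1):
    """One training round in the spirit of the abstract: uniformly sample
    m devices, then aggregate their gradients with weights derived from
    gradient information (hypothetically, alignment with the mean sampled
    gradient), rather than plain uniform averaging."""
    sampled = rng.choice(len(devices), size=m, replace=False)
    grads = [local_gradient(w, devices[k]) for k in sampled]
    g_avg = np.mean(grads, axis=0)

    # Weight each update by its (clipped) inner product with the mean
    # gradient -- an assumed proxy for "expected improvement".
    weights = np.array([max(float(g @ g_avg), 0.0) for g in grads])
    if weights.sum() == 0.0:
        weights = np.ones_like(weights)  # fall back to uniform averaging
    weights /= weights.sum()

    update = sum(wk * gk for wk, gk in zip(weights, grads))
    return w - lr * update
```

On synthetic data where every device holds samples from the same linear model, repeated calls to `folb_style_round` drive the global loss down; the weighting simply biases the aggregate toward updates aligned with the consensus direction.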

Files in This Item:
File | Description | Size | Format
2007.13137.pdf | - | 2.13 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.