
Distributed Learning in Wireless Networks: Recent Progress and Future Challenges

Author(s): Chen, Mingzhe; Gunduz, Deniz; Huang, Kaibin; Saad, Walid; Bennis, Mehdi; et al

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr10v89h5c
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chen, Mingzhe | -
dc.contributor.author | Gunduz, Deniz | -
dc.contributor.author | Huang, Kaibin | -
dc.contributor.author | Saad, Walid | -
dc.contributor.author | Bennis, Mehdi | -
dc.contributor.author | Feljan, Aneta Vulgarakis | -
dc.contributor.author | Poor, H Vincent | -
dc.date.accessioned | 2024-01-21T20:02:08Z | -
dc.date.available | 2024-01-21T20:02:08Z | -
dc.date.issued | 2021-10-06 | en_US
dc.identifier.citation | Chen, Mingzhe, Gunduz, Deniz, Huang, Kaibin, Saad, Walid, Bennis, Mehdi, Feljan, Aneta Vulgarakis, Poor, H Vincent. (2021). Distributed Learning in Wireless Networks: Recent Progress and Future Challenges. IEEE Journal on Selected Areas in Communications, 39 (12), 3579 - 3605. doi:10.1109/jsac.2021.3118346 | en_US
dc.identifier.issn | 0733-8716 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/pr10v89h5c | -
dc.description.abstract | The next generation of wireless networks will enable many machine learning (ML) tools and applications to efficiently analyze various types of data collected by edge devices for inference, autonomy, and decision-making purposes. However, due to resource constraints, delay limitations, and privacy challenges, edge devices cannot offload their entire collected datasets to a cloud server for centralized training of their ML models or for inference purposes. To overcome these challenges, distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges, thus reducing the communication overhead and latency as well as improving data privacy. However, deploying distributed learning over wireless networks faces several challenges, including the uncertain wireless environment (e.g., dynamic channel and interference), limited wireless resources (e.g., transmit power and radio spectrum), and hardware resources (e.g., computational power). This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks. We present a detailed overview of several emerging distributed learning paradigms, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning. For each learning framework, we first introduce the motivation for deploying it over wireless networks. Then, we present a detailed literature review on the use of communication techniques for its efficient deployment. We then introduce an illustrative example to show how to optimize wireless networks to improve its performance. Finally, we introduce future research opportunities. In a nutshell, this paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks. | en_US
dc.format.extent | 3579 - 3605 | en_US
dc.language.iso | en_US | en_US
dc.relation.ispartof | IEEE Journal on Selected Areas in Communications | en_US
dc.rights | Author's manuscript | en_US
dc.title | Distributed Learning in Wireless Networks: Recent Progress and Future Challenges | en_US
dc.type | Journal Article | en_US
dc.identifier.doi | doi:10.1109/jsac.2021.3118346 | -
dc.identifier.eissn | 1558-0008 | -
pu.type.symplectic | http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article | en_US
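
The abstract above describes federated learning as edge devices collaboratively training an ML model without exchanging raw data. As a concrete illustration, the following is a minimal FedAvg-style sketch on synthetic data; it is not taken from the paper, and all names (local_update, fedavg_round, the toy linear-regression setup) are hypothetical choices for this example.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across "edge devices": each device
# keeps its raw data local and only shares model parameters with the server.
true_w = np.array([2.0, -1.0, 0.5])
devices = []
for _ in range(5):
    X = rng.normal(size=(40, 3))
    y = X @ true_w + 0.1 * rng.normal(size=40)
    devices.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """Run a few local gradient-descent epochs on one device's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, devices):
    """One communication round: devices train locally, the server averages weights."""
    local_weights = [local_update(w_global, X, y) for X, y in devices]
    sizes = np.array([len(y) for _, y in devices], dtype=float)
    # Weighted average by local dataset size (the usual FedAvg aggregation rule).
    return np.average(local_weights, axis=0, weights=sizes)

w = np.zeros(3)
for t in range(20):
    w = fedavg_round(w, devices)
print("learned:", np.round(w, 3), "true:", true_w)

In this sketch only the parameter vector crosses the (simulated) wireless link each round; the per-device datasets never leave the device, which is the communication- and privacy-motivated property the survey builds on.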

Files in This Item:
File | Description | Size | Format
2104.02151.pdf | - | 3.47 MB | Adobe PDF


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.