
Revisiting Analog Over-the-Air Machine Learning: The Blessing and Curse of Interference

Author(s): Yang, Howard H; Chen, Zihan; Quek, Tony QS; Poor, H Vincent

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr11n7xm9m
Full metadata record
dc.contributor.author: Yang, Howard H
dc.contributor.author: Chen, Zihan
dc.contributor.author: Quek, Tony QS
dc.contributor.author: Poor, H Vincent
dc.date.accessioned: 2024-02-03T02:27:26Z
dc.date.available: 2024-02-03T02:27:26Z
dc.date.issued: 2021-12-31
dc.identifier.citation: Yang, Howard H, Chen, Zihan, Quek, Tony QS, Poor, H Vincent. (2022). Revisiting Analog Over-the-Air Machine Learning: The Blessing and Curse of Interference. IEEE Journal of Selected Topics in Signal Processing, 16 (3), 406-419. doi:10.1109/jstsp.2021.3139231
dc.identifier.issn: 1932-4553
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr11n7xm9m
dc.description.abstract: We study a distributed machine learning problem carried out by an edge server and multiple agents in a wireless network. The objective is to minimize a global function given by the sum of the agents' local loss functions, and the optimization is conducted via analog over-the-air model training. Specifically, each agent modulates its local gradient onto a set of waveforms, and all agents transmit to the edge server simultaneously. From the received analog signal, the edge server extracts a noisy aggregated gradient, distorted by channel fading and interference, uses it to update the global model, and feeds the updated model back to all the agents for another round of local computing. Since electromagnetic interference is generally heavy-tailed in nature, we model its statistics by the $\alpha$-stable distribution. As a consequence, the aggregated gradient has infinite variance, which precludes conventional convergence analyses that rely on the existence of second-order moments. To circumvent this challenge, we take a new route to establish the convergence rate, as well as the generalization error, of the algorithm. We also show that the training algorithm can be run in tandem with a momentum scheme to accelerate convergence. Our analyses reveal a two-sided effect of the interference on the overall training procedure. On the negative side, heavy-tailed noise slows down model training: the heavier the tail of the interference distribution, the slower the algorithm converges. On the positive side, heavy-tailed noise can increase the generalization power of the trained model: the heavier the tail, the better the model generalizes. This perhaps counterintuitive conclusion implies that the prevailing view of interference, namely that it is purely detrimental to the edge learning system, is outdated, and that we should seek new techniques that exploit, rather than simply mitigate, interference for better machine learning in wireless networks.
dc.format.extent: 406 - 419
dc.language.iso: en_US
dc.relation.ispartof: IEEE Journal of Selected Topics in Signal Processing
dc.rights: Author's manuscript
dc.title: Revisiting Analog Over-the-Air Machine Learning: The Blessing and Curse of Interference
dc.type: Journal Article
dc.identifier.doi: doi:10.1109/jstsp.2021.3139231
dc.identifier.eissn: 1941-0484
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article
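The training round described in the abstract (each agent's gradient superposes over the air; the server receives the analog sum corrupted by heavy-tailed interference and takes a descent step) can be sketched as a toy simulation. This is a minimal illustration, not the authors' implementation: the quadratic losses, the step size, and the use of the Chambers-Mallows-Stuck sampler for symmetric α-stable noise are all our assumptions.

```python
import math
import random

def sas_sample(alpha: float, scale: float = 1.0) -> float:
    """Draw one symmetric alpha-stable sample (Chambers-Mallows-Stuck method)."""
    u = random.uniform(-math.pi / 2, math.pi / 2)
    w = random.expovariate(1.0)
    if abs(alpha - 1.0) < 1e-9:  # alpha = 1 reduces to the Cauchy distribution
        return scale * math.tan(u)
    return scale * (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
                    * (math.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

def over_the_air_sgd_step(model, local_grads, lr, alpha, noise_scale):
    """One round: the agents' gradients superpose in the channel; the server
    receives the analog sum plus alpha-stable interference and averages it."""
    n = len(local_grads)
    updated = []
    for j, w in enumerate(model):
        aggregated = sum(g[j] for g in local_grads)   # over-the-air superposition
        aggregated += sas_sample(alpha, noise_scale)  # heavy-tailed interference
        updated.append(w - lr * aggregated / n)
    return updated

# Toy usage: three agents with quadratic losses (w - c_i)^2; the global
# minimizer is the mean of the c_i. With noise_scale > 0 and alpha < 2,
# the aggregated gradient has infinite variance, as noted in the abstract.
model = [10.0]
targets = [0.0, 2.0, 4.0]
for _ in range(200):
    grads = [[2.0 * (model[0] - c)] for c in targets]
    model = over_the_air_sgd_step(model, grads, lr=0.05, alpha=1.8,
                                  noise_scale=0.01)
```

Channel fading and the modulation onto waveforms are omitted; the sketch only captures the superposition-plus-heavy-tailed-noise structure that drives the paper's analysis.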

Files in This Item:
2107.11733.pdf (454.81 kB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.