Lead federated neuromorphic learning for wireless edge artificial intelligence

Author(s): Yang, Helin; Lam, Kwok-Yan; Xiao, Liang; Xiong, Zehui; Hu, Hao; et al.

Abstract: To realize the full potential of wireless edge artificial intelligence (AI), very large and diverse datasets will often be required for energy-demanding model training on resource-constrained edge devices. This paper proposes a lead federated neuromorphic learning (LFNL) technique, a decentralized, energy-efficient, brain-inspired computing method based on spiking neural networks. The proposed technique enables edge devices to exploit brain-like biophysiological structure to collaboratively train a global model while helping preserve privacy. Experimental results show that, with uneven dataset distributions among edge devices, LFNL achieves recognition accuracy comparable to existing edge AI techniques while reducing data traffic by >3.5× and computational latency by >2.0×. Furthermore, LFNL reduces energy consumption by >4.5× compared to standard federated learning, with a slight accuracy loss of up to 1.5%. The proposed LFNL can therefore facilitate the development of brain-inspired computing and edge AI.
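The abstract describes edge devices collaboratively training a global model, with a lead device coordinating the aggregation. A minimal sketch of the weighted aggregation step in this style of federated learning is shown below; the function name `lead_aggregate` and the dataset-size weighting are illustrative assumptions (standard FedAvg-style averaging), not the paper's actual LFNL implementation.

```python
# Illustrative sketch: a leader device aggregates parameter vectors from
# edge devices, weighting each by its local dataset size (FedAvg-style).
# Names and structure are assumptions for illustration only.

def lead_aggregate(client_weights, client_sizes):
    """Return the dataset-size-weighted average of client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_weights[i] += (n / total) * w[i]
    return global_weights

# Example: three edge devices with uneven dataset sizes (10, 30, 60 samples)
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
print(lead_aggregate(clients, sizes))  # ≈ [4.0, 5.0]
```

Devices with more local data contribute proportionally more to the global model, which is how federated averaging handles the uneven dataset distributions the abstract mentions.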
Electronic Publication Date: 25-Jul-2022
Citation: Yang, Helin, Lam, Kwok-Yan, Xiao, Liang, Xiong, Zehui, Hu, Hao, Niyato, Dusit, Vincent Poor, H. (2022). Lead federated neuromorphic learning for wireless edge artificial intelligence. Nature Communications, 13 (1). 10.1038/s41467-022-32020-w
DOI: 10.1038/s41467-022-32020-w
EISSN: 2041-1723
Language: en
Type of Material: Journal Article
Journal/Proceeding Title: Nature Communications
Version: Final published version. This is an open access article.

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.