Federated Learning With Differential Privacy: Algorithms and Performance Analysis

Author(s): Wei, Kang; Li, Jun; Ding, Ming; Ma, Chuan; Yang, Howard H; et al

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1j96094j
Full metadata record
dc.contributor.author: Wei, Kang
dc.contributor.author: Li, Jun
dc.contributor.author: Ding, Ming
dc.contributor.author: Ma, Chuan
dc.contributor.author: Yang, Howard H
dc.contributor.author: Farokhi, Farhad
dc.contributor.author: Jin, Shi
dc.contributor.author: Quek, Tony QS
dc.contributor.author: Vincent Poor, H
dc.date.accessioned: 2024-02-05T01:18:50Z
dc.date.available: 2024-02-05T01:18:50Z
dc.date.issued: 2020-04-17 [en_US]
dc.identifier.citation: Wei, Kang; Li, Jun; Ding, Ming; Ma, Chuan; Yang, Howard H; Farokhi, Farhad; Jin, Shi; Quek, Tony QS; Vincent Poor, H. (2020). Federated Learning With Differential Privacy: Algorithms and Performance Analysis. IEEE Transactions on Information Forensics and Security, 15, 3454-3469. doi:10.1109/tifs.2020.2988575 [en_US]
dc.identifier.issn: 1556-6013
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/pr1j96094j
dc.description.abstract: Federated learning (FL), a form of distributed machine learning, can largely preserve clients' private data from exposure to adversaries. Nevertheless, private information can still be divulged by analyzing the parameters uploaded by clients, e.g., weights trained in deep neural networks. In this paper, to effectively prevent such information leakage, we propose a novel framework based on the concept of differential privacy (DP), in which artificial noise is added to parameters at the clients' side before aggregation, namely, noising before model aggregation FL (NbAFL). First, we prove that NbAFL can satisfy DP under distinct protection levels by properly adapting the variance of the artificial noise. Then we develop a theoretical convergence bound on the loss function of the FL model trained with NbAFL. Specifically, the theoretical bound reveals the following three key properties: 1) there is a tradeoff between convergence performance and privacy protection level, i.e., better convergence performance comes at a lower protection level; 2) for a fixed privacy protection level, increasing the number $N$ of overall clients participating in FL improves the convergence performance; and 3) there is an optimal number of aggregation times (communication rounds) in terms of convergence performance for a given protection level. Furthermore, we propose a $K$-client random scheduling strategy, where $K$ ($1 \leq K < N$) clients are randomly selected from the $N$ overall clients to participate in each aggregation. We also develop a corresponding convergence bound for the loss function in this case, and show that the $K$-client random scheduling strategy retains the above three properties. Moreover, we find that there is an optimal $K$ that achieves the best convergence performance at a fixed privacy level. Evaluations demonstrate that our theoretical results are consistent with simulations, thereby facilitating the design of various privacy-preserving FL algorithms with different tradeoff requirements on convergence performance and privacy levels. [en_US]
dc.format.extent: 3454-3469 [en_US]
dc.language.iso: en_US [en_US]
dc.relation.ispartof: IEEE Transactions on Information Forensics and Security [en_US]
dc.rights: Author's manuscript [en_US]
dc.title: Federated Learning With Differential Privacy: Algorithms and Performance Analysis [en_US]
dc.type: Journal Article [en_US]
dc.identifier.doi: doi:10.1109/tifs.2020.2988575
dc.identifier.eissn: 1556-6021
pu.type.symplectic: http://www.symplectic.co.uk/publications/atom-terms/1.0/journal-article [en_US]
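The noising-before-aggregation idea described in the abstract can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' implementation: the noise scale below uses the standard Gaussian-mechanism constant $\sigma = C\sqrt{2\ln(1.25/\delta)}/\epsilon$ with an assumed clipping norm `clip_norm`, whereas the paper derives its own variance conditions; the function and parameter names (`noisy_aggregate`, `K`, etc.) are likewise invented for this sketch.

```python
import numpy as np

def gaussian_sigma(epsilon, delta, clip_norm):
    # Standard Gaussian-mechanism noise scale for (epsilon, delta)-DP with
    # L2 sensitivity proportional to clip_norm; illustrative only, as the
    # paper derives its own noise-variance bounds.
    return clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def clip_weights(w, clip_norm):
    # Clip the update to a bounded L2 norm so the added noise can be calibrated.
    norm = np.linalg.norm(w)
    return w * min(1.0, clip_norm / max(norm, 1e-12))

def noisy_aggregate(client_weights, epsilon, delta, clip_norm=1.0, K=None, rng=None):
    # Noising before aggregation: each selected client clips and perturbs its
    # weights locally, then the server averages the noisy uploads. Passing
    # K < N mimics the K-client random scheduling strategy.
    rng = rng if rng is not None else np.random.default_rng()
    N = len(client_weights)
    K = K if K is not None else N
    selected = rng.choice(N, size=K, replace=False)
    sigma = gaussian_sigma(epsilon, delta, clip_norm)
    noisy = [clip_weights(client_weights[i], clip_norm)
             + rng.normal(0.0, sigma, size=client_weights[i].shape)
             for i in selected]
    return np.mean(noisy, axis=0)
```

In this toy form, the privacy/convergence tradeoff from the abstract is visible directly: shrinking `epsilon` inflates `sigma`, so the averaged model moves further from the noise-free average.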

Files in This Item:
File: 1911.00222.pdf (501.88 kB, Adobe PDF)


Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.