On Safeguarding Privacy and Security in the Framework of Federated Learning

Author(s): Ma, Chuan; Li, Jun; Ding, Ming; Yang, Howard H; Shu, Feng; Quek, Tony QS; Poor, H Vincent

To refer to this page use: http://arks.princeton.edu/ark:/88435/pr1t14tp8k
Abstract: Motivated by the advancing computational capacity of wireless end-user equipment (UE), as well as increasing concerns about sharing private data, a new machine learning (ML) paradigm has emerged, namely federated learning (FL). Specifically, FL allows a decoupling of data provision at the UEs from ML model aggregation at a central unit. By training models locally, FL avoids direct data leakage from the UEs, thereby preserving privacy and security to some extent. However, even if raw data are not disclosed by the UEs, an individual's private information can still be extracted by some recently discovered attacks against the FL architecture. In this work, we analyze the privacy and security issues in FL and discuss several challenges to preserving privacy and security when designing FL systems. In addition, we provide extensive simulation results to showcase the discussed issues and possible solutions.
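The decoupling described in the abstract can be illustrated with a minimal sketch of a federated-averaging-style loop: each UE trains on its own private data and only model parameters reach the central unit. The linear-regression task, client data, and function names below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One UE's local training: a few gradient steps on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """Central unit aggregates locally trained models (FedAvg-style mean)."""
    local_models = [local_update(global_w, X, y) for X, y in client_data]
    return np.mean(local_models, axis=0)

# Synthetic private datasets held by three UEs (raw data never leave the UE).
true_w = np.array([1.5, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                 # communication rounds
    global_w = federated_round(global_w, clients)
print(global_w)                     # approaches true_w without sharing raw data
```

Note that only the aggregated parameters are exchanged here; as the abstract points out, such updates can still leak private information under certain attacks, which is the focus of the paper.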
Publication Date: 27-Mar-2020
Citation: Ma, Chuan, Li, Jun, Ding, Ming, Yang, Howard H, Shu, Feng, Quek, Tony QS, Poor, H Vincent. (2020). On Safeguarding Privacy and Security in the Framework of Federated Learning. IEEE Network, 34 (4), 242 - 248. doi:10.1109/mnet.001.1900506
DOI: 10.1109/mnet.001.1900506
ISSN: 0890-8044
EISSN: 1558-156X
Pages: 242 - 248
Type of Material: Journal Article
Journal/Proceeding Title: IEEE Network
Version: Author's manuscript



Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.