Secure Federated Learning: Attacks and Defenses

Date

2022

Abstract

Federated learning is an emerging distributed machine learning paradigm that enables multiple clients to collaboratively learn a global model with the help of a server. Unfortunately, federated learning is vulnerable to poisoning attacks: an attacker who controls some malicious clients can poison the global model by manipulating the local training data or the local model updates on those clients. The corrupted global model then makes incorrect predictions, raising serious concerns about the security of federated learning.

In this dissertation, we study poisoning attacks and defenses in federated learning. We first propose a general framework for untargeted model poisoning attacks called LMPA, which formulates the attack as an optimization problem over the local model updates sent by the malicious clients. We then show how to apply our attack framework to existing Byzantine-robust federated learning methods. Our evaluations show that existing Byzantine-robust defenses are not secure against such carefully crafted attacks, highlighting the need for new defenses.
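The precise optimization that LMPA solves is given in the dissertation; purely as a hypothetical illustration of the general idea, the sketch below crafts malicious updates that push a robust aggregate away from the benign update direction. The choice of coordinate-wise trimmed mean as the target aggregation rule, the function names, and the crude line search over the scaling factor are all illustrative assumptions, not the dissertation's formulation.

import numpy as np

def trimmed_mean(updates, k):
    """Coordinate-wise trimmed mean: drop the k largest and k smallest
    values in each coordinate, then average the rest (a common
    Byzantine-robust aggregation rule, used here only as an example)."""
    sorted_updates = np.sort(updates, axis=0)
    return sorted_updates[k:len(updates) - k].mean(axis=0)

def craft_malicious_updates(benign_updates, num_malicious, k, lambdas=None):
    """Hypothetical sketch of an optimization-based untargeted attack:
    pick the scaling factor lambda for which sending
    -lambda * (benign update direction) from every malicious client
    shifts the robust aggregate furthest from the benign direction."""
    if lambdas is None:
        lambdas = [10.0, 5.0, 2.0, 1.0, 0.5, 0.1]
    benign_mean = benign_updates.mean(axis=0)
    direction = np.sign(benign_mean)              # direction the attacker wants to reverse
    best = None
    for lam in lambdas:                           # crude line search over the scaling factor
        malicious = np.tile(-lam * direction, (num_malicious, 1))
        aggregate = trimmed_mean(np.vstack([benign_updates, malicious]), k)
        deviation = -np.dot(aggregate, benign_mean)   # larger = further from the benign direction
        if best is None or deviation > best[0]:
            best = (deviation, malicious)
    return best[1]

# toy usage: 8 benign clients, 2 malicious, 10-dimensional model updates
rng = np.random.default_rng(0)
benign = rng.normal(loc=0.1, scale=0.05, size=(8, 10))
malicious = craft_malicious_updates(benign, num_malicious=2, k=2)
print(trimmed_mean(np.vstack([benign, malicious]), k=2))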

We then turn to the problem of designing new defenses against poisoning attacks. Existing Byzantine-robust defenses are insufficient because they have no root of trust: from the server's perspective, any client could be malicious. Therefore, we propose FLTrust, in which the server itself bootstraps trust. Specifically, the server collects a small, clean root dataset and uses it to compute a server model update in each round of federated learning; it then assigns each local model update a trust score based on its similarity to the server model update. The server also normalizes the magnitudes of the local model updates before averaging them weighted by the trust scores. We empirically show that FLTrust is robust against existing poisoning attacks, as well as an adaptive attack.
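A minimal sketch of the aggregation step described above, assuming the local updates and the server update are plain NumPy vectors; the exact details (for example, the similarity measure and how an all-zero set of trust scores is handled) follow the dissertation, and the fallback and clipped cosine similarity below are only illustrative choices.

import numpy as np

def fltrust_aggregate(local_updates, server_update, eps=1e-9):
    """Sketch of a FLTrust-style aggregation round.

    local_updates: array of shape (num_clients, dim), one local update per row.
    server_update: the update the server computes on its small, clean
                   root dataset in the same round.
    """
    server_norm = np.linalg.norm(server_update)
    trust_scores, normalized = [], []
    for g in local_updates:
        # Trust score: clipped cosine similarity to the server update,
        # so updates pointing away from it get zero weight.
        cos = np.dot(g, server_update) / (np.linalg.norm(g) * server_norm + eps)
        trust_scores.append(max(cos, 0.0))
        # Normalize each local update to the server update's magnitude,
        # limiting the influence of abnormally large updates.
        normalized.append(g * server_norm / (np.linalg.norm(g) + eps))
    trust_scores = np.array(trust_scores)
    normalized = np.stack(normalized)
    if trust_scores.sum() == 0:
        return server_update          # illustrative fallback: use the server's own update
    return (trust_scores[:, None] * normalized).sum(axis=0) / trust_scores.sum()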

We note that there is an ongoing arms race between attacks and defenses in federated learning. Although we have empirically demonstrated the robustness of FLTrust against different attacks, we cannot guarantee its security against future attacks. One way to end the arms race is to design defenses with provable security guarantees. In the third part of this dissertation, we propose an ensemble-based federated learning method called FLCert, which we prove to be secure when the number of malicious clients is bounded. The guarantee is independent of the specific attack, and therefore continues to hold for future attacks as long as the number of malicious clients does not exceed a certain threshold.
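As a rough sketch of the ensemble idea, assuming each malicious client can influence at most one of the group models (a simplification; the actual certified guarantee, grouping strategy, and proof are in the dissertation), a majority vote over the group models is provably unchanged whenever the vote margin exceeds twice the number of malicious clients.

from collections import Counter

def flcert_predict(group_predictions, num_malicious):
    """Illustrative ensemble prediction with a certification check,
    assuming each malicious client affects at most one group model.

    group_predictions: list of labels, one per group model.
    Returns (majority label, certified flag).
    """
    counts = Counter(group_predictions)
    (top_label, top_count), *rest = counts.most_common()
    second_count = rest[0][1] if rest else 0
    # m malicious clients can flip at most m group votes; each flip can
    # shrink the margin between the top two labels by at most 2.
    certified = (top_count - second_count) > 2 * num_malicious
    return top_label, certified

# toy usage: 7 group models vote on one test input
labels = ["cat", "cat", "cat", "cat", "dog", "cat", "dog"]
print(flcert_predict(labels, num_malicious=1))   # ('cat', True)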

Citation

Cao, Xiaoyu (2022). Secure Federated Learning: Attacks and Defenses. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/26785.

Except where otherwise noted, student scholarship that was shared on DukeSpace after 2009 is made available to the public under a Creative Commons Attribution / Non-commercial / No derivatives (CC-BY-NC-ND) license. All rights in student work shared on DukeSpace before 2009 remain with the author and/or their designee, whose permission may be required for reuse.