Vaccination Against Backdoor Attack on Federated Learning Systems
Principal Investigator: Feng Li
Agnideven Palanisamy Sundar, Feng Li, Xukai Zou, Tianchong Gao, Ryan Hosler
Abstract
Federated Learning (FL) is becoming widely adopted because it improves model performance for participants who lack voluminous training data. However, existing FL methods are highly susceptible to Byzantine and backdoor attacks. Unlike Byzantine attacks, which can be detected and prevented with relative ease, backdoor attacks are far more insidious: they infiltrate the participants' local models and corrupt only a targeted subset of operations, leaving the main task seemingly intact. Existing defense mechanisms rely entirely on a trusted third-party server to handle such attacks. Placing this much trust in a third party's ability to catch backdoors takes significant control away from the participants. Moreover, if the server fails to detect the backdoor attack, or if the server itself acts maliciously, the attack propagates to and affects all participants. In this paper, we propose a vaccination-based technique that gives participants stronger control over their own models. Irrespective of the central server's ability to prevent a backdoor attack, participants can mitigate such attacks to a significant extent. Our vaccination method is one of the few defenses that can be executed entirely on the client side, with minimal computation overhead and zero communication overhead. Moreover, it can be combined with existing server-based defenses to boost performance. We experimentally show that our simple yet effective vaccination method prevents the most commonly applied backdoor attacks while maintaining high main-task accuracy.