Distributed Stealthy Backdoor Attack on Federated Learning
Primary Investigator:
Feng Li
Agnideven Palanisamy Sundar, Dr. Feng Li, Dr. Xukai Zou, and Dr. Tianchong Gao
Abstract
Federated Learning provides enhanced privacy over traditional centralized learning, but it is as susceptible to backdoor attacks as its centralized counterpart. Conventionally, data poisoning-based backdoor attacks in Federated Learning overlay the same small trigger pattern, shared among all malicious clients, on multiple benign data samples during local training. At inference time, the global model misclassifies any data sample carrying that trigger but behaves benignly otherwise. Unlike centralized learning, the distributed nature of Federated Learning opens up the possibility of using varied trigger patterns. This work builds an attack scheme in which each malicious client uses a distinct local trigger while sharing the same attack goal. Each set of malicious clients embeds a sizable, distinct, yet stealthy trigger pattern, while the attack can be invoked with a single small trigger pattern during global model inference. We conduct extensive experiments to show that our approach is far stealthier than the centralized-trigger approach; this stealthiness is demonstrated by the attack's ability to evade state-of-the-art centralized defenses. The effectiveness of our work is further explained using feature-level visual interpretation methods such as Grad-CAM.
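
To make the data poisoning step described above concrete, the following is a minimal sketch of how a single malicious client might stamp its own local trigger patch onto a fraction of its training batch and flip the labels to the shared target class. It assumes PyTorch-style image tensors; the function name apply_local_trigger, the patch placement, the poison fraction, and the label-flip details are illustrative assumptions and not the paper's exact construction.

import torch

def apply_local_trigger(images, labels, trigger, position, target_label, poison_fraction=0.3):
    # images: (N, C, H, W) float tensor; labels: (N,) long tensor.
    # trigger: (C, h, w) patch used by this particular malicious client;
    # position: (row, col) of the patch's top-left corner.
    poisoned_images = images.clone()
    poisoned_labels = labels.clone()
    n_poison = int(poison_fraction * images.size(0))
    idx = torch.randperm(images.size(0))[:n_poison]   # samples to poison
    r, c = position
    h, w = trigger.shape[-2:]
    poisoned_images[idx, :, r:r + h, c:c + w] = trigger   # stamp this client's distinct local trigger
    poisoned_labels[idx] = target_label                   # shared attack goal: same target class for all clients
    return poisoned_images, poisoned_labels

# Hypothetical usage for one malicious client (illustrative shapes only):
x_batch = torch.rand(32, 3, 32, 32)           # e.g. CIFAR-10-sized images
y_batch = torch.randint(0, 10, (32,))
local_trigger = torch.ones(3, 4, 4)           # this client's own 4x4 patch
x_poison, y_poison = apply_local_trigger(x_batch, y_batch, local_trigger,
                                         position=(0, 0), target_label=0)

Each malicious client would use a different trigger patch and/or position during local training, while at inference time only a single small trigger pattern is needed to invoke the attack on the global model.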