Jiacheng Du and Jiahui Hu, The State Key Laboratory of Blockchain and Data Security, Zhejiang University, P. R. China; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, P. R. China; and College of Computer Science and Electronic Engineering, Hunan University, P. R. China
Zhibo Wang, The State Key Laboratory of Blockchain and Data Security, Zhejiang University, P. R. China; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, P. R. China
Peng Sun, College of Computer Science and Electronic Engineering, Hunan University, P. R. China
Neil Gong, Department of Electrical and Computer Engineering, Duke University, USA
Kui Ren and Chun Chen, The State Key Laboratory of Blockchain and Data Security, Zhejiang University, P. R. China; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, P. R. China
Federated learning (FL) facilitates collaborative model training among multiple clients without exposing their raw data. However, recent studies have shown that clients' private training data can be reconstructed from the gradients shared in FL, through what are known as gradient inversion attacks (GIAs). While GIAs have demonstrated effectiveness under idealized settings and auxiliary assumptions, their actual efficacy against practical FL systems remains under-explored. To address this gap, we conduct a comprehensive study of GIAs in this work. We start with a survey of GIAs that establishes a timeline to trace their evolution and develops a systematization to uncover their inherent threats. Rethinking GIAs in practical FL systems, we identify three fundamental aspects that influence their effectiveness: training setup, model, and post-processing. Guided by these aspects, we perform extensive theoretical and empirical evaluations of state-of-the-art GIAs across diverse settings. Our findings highlight that GIAs are notably constrained, fragile, and easily defended against. Specifically, GIAs exhibit inherent limitations against practical local training settings; their effectiveness is highly sensitive to the trained model; and even simple post-processing applied to the shared gradients can serve as an effective defense. Our work provides crucial insights into the limited threat GIAs pose to practical FL systems. By rectifying prior misconceptions, we hope to inspire more accurate and realistic investigations into this topic.
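For intuition, the sketch below illustrates the classic optimization-based formulation of a GIA (in the spirit of DLG, Zhu et al., 2019): the attacker initializes dummy data and optimizes it so that its gradient matches the gradient shared by the client. This is a minimal illustrative sketch, not the paper's evaluation setup; the model, data shapes, and iteration counts are placeholder assumptions, and we assume the true label is already known to the attacker (labels can often be inferred from gradients in this literature).

```python
# Minimal sketch of an optimization-based gradient inversion attack.
# All shapes, models, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

# Hypothetical victim model and one private training example.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()

x_true = torch.rand(1, 1, 28, 28)  # client's private input
y_true = torch.tensor([3])         # label, assumed known/inferred

# The "shared gradient" that the server observes in FL.
true_grads = torch.autograd.grad(
    criterion(model(x_true), y_true), model.parameters()
)
true_grads = [g.detach() for g in true_grads]

# Attacker: optimize dummy data so its gradient matches the shared one.
x_dummy = torch.randn(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy])

for _ in range(30):
    def closure():
        optimizer.zero_grad()
        dummy_grads = torch.autograd.grad(
            criterion(model(x_dummy), y_true),
            model.parameters(),
            create_graph=True,  # keep graph to differentiate the match loss
        )
        # Gradient-matching loss: squared L2 distance between the
        # dummy gradient and the observed (shared) gradient.
        grad_match = sum(
            ((dg - tg) ** 2).sum()
            for dg, tg in zip(dummy_grads, true_grads)
        )
        grad_match.backward()
        return grad_match
    optimizer.step(closure)

# After convergence, x_dummy approximates the private input x_true.
```

This formulation also hints at why the findings above hold: any client-side post-processing of the shared gradients (e.g., pruning or quantizing them before upload) perturbs the `true_grads` that the matching loss targets, degrading the reconstruction.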