TY - JOUR
T1 - Heterogeneous Differential-Private Federated Learning
T2 - Trading Privacy for Utility Truthfully
AU - Lin, Xi
AU - Wu, Jun
AU - Li, Jianhua
AU - Sang, Chao
AU - Hu, Shiyan
AU - Deen, M. Jamal
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/11/1
Y1 - 2023/11/1
N2 - Differential-private federated learning (DP-FL) has emerged to prevent privacy leakage when model parameters encoding sensitive information are disclosed. However, existing DP-FL frameworks usually preserve privacy homogeneously across clients, ignoring their differing privacy attitudes and expectations. Moreover, DP-FL can hardly guarantee that uncontrollable clients (i.e., stragglers) have truthfully added the expected DP noise. To tackle these challenges, we propose a heterogeneous differential-private federated learning framework, named HDP-FL, which captures the variation in privacy attitudes through truthful incentives. First, we investigate the impact of heterogeneous DP noise on the theoretical convergence of FL, revealing a tradeoff between privacy loss and learning performance. Then, based on this privacy-utility tradeoff, we design a contract-based incentive mechanism that encourages clients to truthfully reveal their privacy attitudes and contribute to learning as desired. In particular, clients are classified into different privacy preference types, and the optimal privacy-price contracts are derived for both the discrete-privacy-type and continuous-privacy-type models. Extensive experiments on real datasets demonstrate that HDP-FL maintains satisfactory learning performance while accommodating different privacy attitudes, and validate the truthfulness, individual rationality, and effectiveness of our incentive mechanism.
AB - Differential-private federated learning (DP-FL) has emerged to prevent privacy leakage when model parameters encoding sensitive information are disclosed. However, existing DP-FL frameworks usually preserve privacy homogeneously across clients, ignoring their differing privacy attitudes and expectations. Moreover, DP-FL can hardly guarantee that uncontrollable clients (i.e., stragglers) have truthfully added the expected DP noise. To tackle these challenges, we propose a heterogeneous differential-private federated learning framework, named HDP-FL, which captures the variation in privacy attitudes through truthful incentives. First, we investigate the impact of heterogeneous DP noise on the theoretical convergence of FL, revealing a tradeoff between privacy loss and learning performance. Then, based on this privacy-utility tradeoff, we design a contract-based incentive mechanism that encourages clients to truthfully reveal their privacy attitudes and contribute to learning as desired. In particular, clients are classified into different privacy preference types, and the optimal privacy-price contracts are derived for both the discrete-privacy-type and continuous-privacy-type models. Extensive experiments on real datasets demonstrate that HDP-FL maintains satisfactory learning performance while accommodating different privacy attitudes, and validate the truthfulness, individual rationality, and effectiveness of our incentive mechanism.
KW - Federated learning
KW - heterogeneous differential privacy
KW - privacy-utility tradeoff
KW - truthful incentives
UR - http://www.scopus.com/inward/record.url?scp=85148452089&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85148452089&partnerID=8YFLogxK
U2 - 10.1109/TDSC.2023.3241057
DO - 10.1109/TDSC.2023.3241057
M3 - Article
AN - SCOPUS:85148452089
SN - 1545-5971
VL - 20
SP - 5113
EP - 5129
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
IS - 6
ER -