TY - JOUR
T1 - DNN-DP: Differential Privacy Enabled Deep Neural Network Learning Framework for Sensitive Crowdsourcing Data
AU - Wang, Yufeng
AU - Gu, Min
AU - Ma, Jianhua
AU - Jin, Qun
N1 - Funding Information:
Manuscript received July 1, 2019; revised September 21, 2019 and October 22, 2019; accepted October 24, 2019. Date of publication November 21, 2019; date of current version February 24, 2020. This work was supported by the Qinglan Project of Jiangsu Province. (Corresponding author: Yufeng Wang.) Y. Wang and M. Gu are with the College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210023, China (e-mail: wfwang@njupt.edu.cn).
Publisher Copyright:
© 2014 IEEE.
PY - 2020/2
Y1 - 2020/2
AB - Deep neural network (DNN) learning has found significant applications in various fields, especially prediction and classification. Frequently, the data used for training are provided by crowdsourcing workers, and the training process may violate their privacy. A qualified prediction model should protect data privacy in both the training and classification/prediction phases. To address this issue, we develop a differential privacy (DP)-enabled DNN learning framework, DNN-DP, which intentionally injects noise into the affine transformation of the input data features and thereby provides DP protection for the crowdsourced sensitive training data. Specifically, we estimate the importance of each feature with respect to the target categories and follow the principle that less noise is injected into more important features, so as to preserve the data utility of the model. Moreover, we design an adaptive coefficient for the added noise to accommodate heterogeneous feature value ranges. Theoretical analysis proves that DNN-DP preserves $\varepsilon$-differential privacy in its computation. Simulations based on the US Census data set demonstrate the superiority of our method in predictive accuracy over existing privacy-aware machine learning methods.
KW - Adaptive noise
KW - crowdsourcing data
KW - deep neural network (DNN)
KW - differential privacy (DP)
UR - http://www.scopus.com/inward/record.url?scp=85076174017&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85076174017&partnerID=8YFLogxK
DO - 10.1109/TCSS.2019.2950017
M3 - Article
AN - SCOPUS:85076174017
SN - 2329-924X
VL - 7
SP - 215
EP - 224
JO - IEEE Transactions on Computational Social Systems
JF - IEEE Transactions on Computational Social Systems
IS - 1
M1 - 8909376
ER -