Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method
for applying privacy in the training of deep learning models. It adds
isotropic Gaussian noise to gradients during training, which can perturb
those gradients in any direction, damaging utility. Metric DP, however, can
provide alternative mechanisms, based on arbitrary metrics, that may be
better suited to preserving utility. In this paper, we apply
\textit{directional privacy}, via a mechanism based on the von Mises-Fisher
(VMF) distribution, to perturb gradients in terms of \textit{angular
distance}, so that gradient direction is broadly preserved. We show that this
provides both $\epsilon$-DP and $\epsilon d$-privacy for deep learning
training, rather than the $(\epsilon, \delta)$-privacy of the Gaussian
mechanism; notably, the $\epsilon d$-privacy guarantee does not require a
$\delta > 0$ term, and it degrades smoothly with the dissimilarity of the
input gradients.
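To make the mechanism concrete, below is a minimal sketch of directional gradient perturbation with VMF noise, using SciPy's vonmises_fisher sampler. The function name vmf_perturb_gradient, the DP-SGD-style norm clipping, and the choice to set the concentration parameter kappa directly to $\epsilon$ are our assumptions for illustration; the paper's actual mechanism and calibration may differ.

```python
import numpy as np
from scipy.stats import vonmises_fisher  # requires SciPy >= 1.11

def vmf_perturb_gradient(grad, epsilon, clip_norm=1.0, rng=None):
    """Perturb only the *direction* of a gradient with von Mises-Fisher noise.

    Illustrative sketch: setting the concentration kappa = epsilon is an
    assumption here (the VMF density is proportional to exp(kappa * mu^T x));
    the paper's exact calibration may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Clip the gradient norm, as in DP-SGD, to bound sensitivity.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # VMF is a distribution over unit vectors: centre it on the gradient's
    # direction, so small angular deviations are most likely.
    mu = clipped / np.linalg.norm(clipped)
    sample = vonmises_fisher(mu=mu, kappa=epsilon).rvs(1, random_state=rng)
    noisy_direction = np.ravel(sample)  # robust to (1, d) or (d,) output shapes
    # Restore the clipped magnitude: only the direction was randomised.
    return noisy_direction * np.linalg.norm(clipped)

# Example: perturb a toy 3-dimensional gradient.
g = np.array([0.3, -1.2, 0.5])
print(vmf_perturb_gradient(g, epsilon=50.0))
```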
Since values of $\epsilon$ cannot be compared directly across these different
frameworks, we examine empirical privacy calibration mechanisms that go
beyond previous work on empirically calibrating privacy within standard DP
frameworks using membership inference attacks (MIA); we show that a
combination of enhanced MIA and reconstruction attacks provides a suitable
method for privacy calibration. Experiments on key datasets then indicate
that the VMF mechanism can outperform the Gaussian mechanism in the
utility-privacy trade-off. In particular, our experiments provide a direct
comparison of privacy between the two approaches in terms of their ability to
defend against reconstruction and membership inference attacks.
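To illustrate the empirical calibration idea, the sketch below computes the ROC AUC of a standard loss-threshold membership inference attack; an AUC near 0.5 means the attacker cannot distinguish training members from non-members. The abstract does not specify the "enhanced MIA" or the reconstruction attacks used, so this generic attack, and the synthetic loss distributions in the example, are assumed stand-ins for how attack success can serve as a common privacy yardstick across mechanisms.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def loss_threshold_mia_auc(member_losses, nonmember_losses):
    """ROC AUC of a loss-threshold membership inference attack.

    Lower loss means the model fits the example better, hence it is more
    likely a training member. AUC near 0.5 indicates little membership
    leakage; higher AUC indicates weaker empirical privacy.
    """
    # Score each example by negated loss, so higher score = "member" guess.
    scores = np.concatenate([-np.asarray(member_losses),
                             -np.asarray(nonmember_losses)])
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))])
    return roc_auc_score(labels, scores)

# Example with synthetic losses: members fit slightly better on average.
rng = np.random.default_rng(0)
members = rng.gamma(shape=2.0, scale=0.4, size=1000)
nonmembers = rng.gamma(shape=2.0, scale=0.6, size=1000)
print(loss_threshold_mia_auc(members, nonmembers))  # > 0.5: some leakage
```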