Individual Sensitivity Preprocessing for Data Privacy

Thursday, June 20, 2019 - 2:00pm - 2:50pm
Lind 305
Rachel Cummings (Georgia Institute of Technology)
The sensitivity metric in differential privacy, informally defined as the largest change in output between neighboring databases, plays a central role in determining the accuracy of private data analyses. Techniques for improving accuracy when the average sensitivity is much smaller than the worst-case sensitivity have been developed within the differential privacy literature, including tools such as smooth sensitivity, Sample-and-Aggregate, Propose-Test-Release, and Lipschitz extensions.
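To make the role of sensitivity concrete, here is a minimal sketch (not from the talk; the function names and parameters are illustrative) of the standard Laplace mechanism, where the noise scale is set by the worst-case sensitivity. For the median of a database of values bounded in [lower, upper], the worst-case sensitivity is the full range, since changing one record in a pathological database can move the median that far, which is why worst-case calibration can be very lossy when typical databases are far better behaved:

```python
import math
import random
import statistics

def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) via inverse-CDF sampling.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_median(data, lower, upper, epsilon, rng):
    # Worst-case (global) sensitivity of the median on data bounded in
    # [lower, upper] is the full range upper - lower: moving one record
    # can, in the worst case, shift the median across the whole range.
    sensitivity = upper - lower
    return statistics.median(data) + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
data = [rng.random() for _ in range(101)]
estimate = private_median(data, 0.0, 1.0, epsilon=1.0, rng=rng)
```

The tools listed above (smooth sensitivity, Propose-Test-Release, and the rest) all aim to replace this worst-case noise scale with something closer to the sensitivity actually exhibited near the database at hand.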

In this work, we provide a new and general Sensitivity-Preprocessing framework for reducing sensitivity, whose efficient application yields state-of-the-art accuracy for privately releasing the median and mean with no assumptions on the underlying database. In particular, our framework compares favorably to smooth sensitivity for privately releasing the median, in terms of both running time and accuracy. Furthermore, because our framework is a preprocessing step, it is complementary to smooth sensitivity and any other private mechanism, and applying both can yield further gains in accuracy. We additionally introduce a new notion of individual sensitivity and show that it is an important metric in the variant definition of personalized differential privacy. We show that our algorithm extends to this setting and serves as a useful tool for this variant definition and its applications in markets for privacy.
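The general idea of lowering sensitivity by transforming the data before any private mechanism runs can be illustrated with a simpler, standard technique, clipping, for the mean. This is not the Sensitivity-Preprocessing framework from the talk, just a generic sketch of why a preprocessing step composes with whatever private mechanism is applied afterwards (names and parameters are illustrative):

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) via inverse-CDF sampling.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_mean_with_clipping(data, clip, epsilon, rng):
    # Preprocessing step: clip each record to [-clip, clip]. After
    # clipping, changing one record moves the sum by at most 2 * clip,
    # so the mean has sensitivity 2 * clip / n regardless of how
    # extreme the raw values were.
    n = len(data)
    clipped = [max(-clip, min(clip, x)) for x in data]
    sensitivity = 2 * clip / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon, rng)
```

Because the clipping happens before noise is added, the downstream release could equally use smooth sensitivity or any other private mechanism on the preprocessed data, which mirrors the composability point in the abstract.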

Joint work with D. Durfee