Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile

Tyler Leblond, Joseph Munoz, Fred Lu, Maya Fuchs, Elliot Zaresky-Williams, Edward Raff, Brian Testa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

Differential privacy (DP) is the prevailing technique for protecting user data in machine learning models. However, this framework has notable shortcomings, including a lack of clarity around selecting the privacy budget ϵ and a lack of quantification of the privacy leakage for a particular data row by a particular trained model. We make progress toward addressing these limitations, and offer a new perspective for visualizing DP results, by studying a privacy metric that quantifies the extent to which a model trained on a dataset using a DP mechanism is "covered" by each of the distributions resulting from training on neighboring datasets. We connect this coverage metric to what has been established in the literature and use it to rank the privacy of individual samples from the training set in what we call a privacy profile. We additionally show that the privacy profile can be used to probe an observed transition to indistinguishability that takes place in the neighboring distributions as ϵ decreases, which we suggest is a tool that can enable the selection of ϵ by the ML practitioner wishing to make use of DP.
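The abstract's coverage idea can be illustrated with a minimal sketch. This is not the paper's actual method or model setting; it is a hypothetical toy using a Gaussian-mechanism noisy mean, where "coverage" of an observed output is taken to be its density under each leave-one-out neighbor's output distribution, and rows are ranked by that density to form a privacy-profile-like ordering. All names, values, and the choice of mechanism here are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical training rows; 10.0 is an outlier we expect to be least private.
data = np.array([1.0, 2.0, 3.0, 10.0])
sigma = 0.5  # noise scale of the assumed Gaussian mechanism

# One draw from the mechanism on the full dataset: a noisy mean.
theta = data.mean() + rng.normal(0.0, sigma)

# Coverage of theta under each leave-one-out neighbor's output distribution.
profile = []
for i in range(len(data)):
    neighbor = np.delete(data, i)
    coverage = gaussian_pdf(theta, neighbor.mean(), sigma)
    profile.append((i, coverage))

# Low coverage means the neighbor distribution poorly explains the observed
# output, i.e. removing that row shifts the output distribution the most.
profile.sort(key=lambda t: t[1])
for i, coverage in profile:
    print(f"row {i} (value {data[i]}): coverage density {coverage:.4f}")
```

In this toy setting the outlier row dominates the mean, so its leave-one-out distribution covers the observed output worst and it sorts to the top of the profile; the paper's metric plays an analogous role for full model-training distributions.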

Original language: English
Title of host publication: AISec 2023 - Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security
Publisher: Association for Computing Machinery, Inc
Pages: 23-33
Number of pages: 11
ISBN (Electronic): 9798400702600
DOIs
State: Published - Nov 30 2023
Externally published: Yes
Event: 16th ACM Workshop on Artificial Intelligence and Security, AISec 2023, co-located with CCS 2023 - Copenhagen, Denmark
Duration: Nov 30 2023 → …

Publication series

Name: AISec 2023 - Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security

Conference

Conference: 16th ACM Workshop on Artificial Intelligence and Security, AISec 2023, co-located with CCS 2023
Country/Territory: Denmark
City: Copenhagen
Period: 11/30/23 → …

Keywords

  • differential privacy
  • machine unlearning
