Locally Differentially Private Bayesian Inference

Tejas Kulkarni, Joonas Jälkö, Samuel Kaski, Antti Honkela

Research output: Contribution to conference › Paper › peer-review

Abstract

In recent years, local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in scenarios where the aggregator is not trusted. LDP provides client-side privacy by adding noise at the user's end, so clients need not rely on the trustworthiness of the aggregator.
In this work, we provide a noise-aware probabilistic modeling framework, which allows Bayesian inference to take into account the noise added for privacy under LDP, conditioned on locally perturbed observations. The stronger privacy protection provided by LDP protocols (compared to the central model) comes at the cost of a much harsher privacy-utility trade-off. Our framework tackles several computational and statistical challenges posed by LDP for accurate uncertainty quantification in Bayesian settings. We demonstrate the efficacy of our framework in parameter estimation for univariate and multivariate distributions as well as logistic and linear regression.
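
To make the idea concrete, the sketch below illustrates what noise-aware inference means in the simplest possible setting; it is not the paper's actual method. The assumptions are illustrative only: each client holds a scalar drawn from a normal distribution, releases it with additive Gaussian noise of known scale, and the aggregator places a conjugate normal prior on the mean. The noise-aware model places the likelihood on the perturbed releases, with the privacy noise made explicit, rather than treating the releases as raw data.

```python
# Minimal illustrative sketch of noise-aware Bayesian inference under LDP.
# Assumptions (not from the paper): x_i ~ N(mu, sigma^2) on each client,
# released as z_i = x_i + N(0, sigma_ldp^2); conjugate normal prior on mu.
import numpy as np

rng = np.random.default_rng(0)

# --- Client side: local perturbation ---
mu_true, sigma, sigma_ldp, n = 1.5, 1.0, 4.0, 2000
x = rng.normal(mu_true, sigma, size=n)      # private data (never leaves clients)
z = x + rng.normal(0.0, sigma_ldp, size=n)  # locally perturbed releases

# --- Aggregator side: noise-aware posterior for mu ---
prior_mean, prior_var = 0.0, 10.0
obs_var = sigma**2 + sigma_ldp**2           # marginal variance of each z_i
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + z.sum() / obs_var)

# A noise-blind model would use sigma**2 here and report overconfident intervals.
print(f"posterior mean {post_mean:.3f} +/- {np.sqrt(post_var):.3f}")
```

The point of the sketch is the `obs_var` term: the posterior uncertainty reflects both the sampling variance and the noise injected for privacy, which is the behaviour a noise-ignoring analysis would miss.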
Original language: English
Number of pages: 21
Publication status: Published - 2022

Keywords

  • Machine learning
  • Cryptography and Security
