Privacy-Preserving ML

DeepLeak: Privacy Enhancing Hardening of Model Explanations Against Membership Leakage

Machine learning (ML) explainability is central to algorithmic transparency in high-stakes settings such as predictive diagnostics and loan approval. Yet these same domains demand …

Firas Ben Hmida

DeepLeak

Privacy hardening for explanation methods against membership inference leakage.

Firas Ben Hmida

MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members

In membership inference attacks (MIAs), an adversary observes the predictions of a model to determine whether a sample is part of the model's training data. Existing MIA defenses …

Ismat Jarin
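The MIAShield teaser describes the attacker's side of a membership inference attack: deciding from a model's predictions whether a sample was in the training set. A minimal sketch of the standard confidence-thresholding baseline (this is a generic illustration of the attack the teaser describes, not MIAShield's defense; the function name, threshold, and data are all hypothetical):

```python
import numpy as np

def membership_inference(confidences, threshold=0.9):
    # Flag a sample as a training member when the model's top-class
    # confidence exceeds a threshold -- training members tend to draw
    # more confident predictions than unseen samples.
    # Hypothetical baseline attack, not MIAShield's method.
    return confidences.max(axis=1) >= threshold

# Overconfident predictions (member-like) vs. uncertain ones (non-member-like).
member_like = np.array([[0.98, 0.01, 0.01], [0.95, 0.03, 0.02]])
non_member_like = np.array([[0.40, 0.35, 0.25], [0.55, 0.30, 0.15]])

print(membership_inference(member_like))      # both flagged as members
print(membership_inference(non_member_like))  # neither flagged
```

Defenses such as preemptive exclusion of members aim to break exactly this signal: if a queried member never reaches the model that memorized it, its confidence no longer separates members from non-members.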

DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning

Differential Privacy (DP) has emerged as a rigorous formalism to reason about quantifiable privacy leakage. In machine learning (ML), DP has been employed to limit …

Ismat Jarin

PRICURE: Privacy-Preserving Collaborative Inference in a Multi-Party Setting

When multiple parties that deal with private data aim for a collaborative prediction task such as medical image classification, they are often constrained by data protection …

Ismat Jarin