DeepLeak

Jan 1, 2026
Firas Ben Hmida, Zain Sbeih, Philemon Hailemariam, Birhanu Eshete

Project Snapshot

  • Paper: DeepLeak: Privacy Enhancing Hardening of Model Explanations Against Membership Leakage
  • Venue: IEEE SaTML 2026
  • Primary Theme: Privacy-preserving explainability
  • Main Artifacts: Codebase + dataset package

Authors

  • Firas Ben Hmida, PhD Candidate
  • Zain Sbeih, Co-Founder, Rewixx; B.Sc., 2024-2025, University of Michigan-Dearborn
  • Philemon Hailemariam, PhD Candidate
  • Birhanu Eshete, Principal Investigator; Associate Professor of Computer Science at the University of Michigan-Dearborn and Director of the Data-Driven Security & Privacy Lab (DSPLab)

Overview

DeepLeak studies the privacy risks of post-hoc explanation methods and provides mitigation strategies that reduce membership leakage while preserving explanation utility. The project focuses on practical, model-agnostic hardening that can be applied in high-stakes ML deployments.
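To make the leakage concern concrete, below is a minimal toy sketch (not the paper's attack) of an explanation-based membership audit: the attacker scores each example by a statistic of its attribution vector, here the per-example variance, and thresholds the score to guess membership. The synthetic attributions, the choice of statistic, and the threshold sweep are all illustrative assumptions.

```python
# Toy sketch of an explanation-based membership audit (illustrative only).
# The attacker thresholds a per-example attribution statistic to guess membership.
import numpy as np

rng = np.random.default_rng(0)

def attribution_variance(attributions: np.ndarray) -> np.ndarray:
    """Per-example variance of feature attributions, used as the attack score."""
    return attributions.var(axis=1)

# Synthetic stand-ins: in this toy setup, members tend to have lower-variance
# attributions than non-members.
member_attr = rng.normal(0.0, 0.8, size=(1000, 20))
nonmember_attr = rng.normal(0.0, 1.0, size=(1000, 20))

scores = np.concatenate([attribution_variance(member_attr),
                         attribution_variance(nonmember_attr)])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = member

# Sweep thresholds and report the best balanced accuracy of the attack.
thresholds = np.quantile(scores, np.linspace(0.01, 0.99, 99))
best_acc = max(
    0.5 * ((scores[labels == 1] <= t).mean() + (scores[labels == 0] > t).mean())
    for t in thresholds
)
print(f"best attack balanced accuracy: {best_acc:.3f}")
```

In a real audit the attributions would come from an explainer run on the deployed model, and the gap between member and non-member score distributions is the kind of signal an explanation-aware audit quantifies.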

What This Project Delivers

  • Explanation-aware leakage auditing across multiple explanation families.
  • Hardening strategies including attribution clipping, masking, and calibrated noise (see the sketch after this list).
  • Reproducible artifacts to evaluate privacy/utility tradeoffs under consistent settings.
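As a rough illustration of the hardening directions named above (not the calibrated procedure from the paper), the sketch below clips attribution magnitudes, masks all but the top-k features per example, and adds Gaussian noise before an explanation is released. The clip bound, k, and noise scale are placeholder values that a real deployment would need to calibrate against a target privacy/utility tradeoff.

```python
# Illustrative hardening of attributions: clip, keep top-k, add Gaussian noise.
# Parameter values are placeholders, not calibrated settings.
import numpy as np

def harden_attributions(attr: np.ndarray,
                        clip: float = 1.0,
                        top_k: int = 5,
                        noise_scale: float = 0.1,
                        rng=None) -> np.ndarray:
    """Return a hardened copy of an (n_examples, n_features) attribution matrix."""
    rng = rng or np.random.default_rng()
    out = np.clip(attr, -clip, clip)                      # bound per-feature influence
    # Zero out everything outside the top-k magnitudes in each row.
    keep = np.argsort(np.abs(out), axis=1)[:, -top_k:]
    mask = np.zeros_like(out, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    out = np.where(mask, out, 0.0)
    out = out + rng.normal(0.0, noise_scale, size=out.shape)  # noise stand-in
    return out

# Example: harden a batch of raw attributions before releasing them.
raw = np.random.default_rng(1).normal(size=(4, 20))
print(harden_attributions(raw, rng=np.random.default_rng(2)).round(2))
```

Clipping first bounds each feature's possible influence, which is what makes a fixed noise scale meaningful; masking then limits how many coordinates can leak anything at all.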

Repository and Paper
