Data-Driven Security & Privacy Lab (DSPLab)

We build trustworthy AI systems that are robust, privacy-preserving, and auditable. Our work sits at the intersection of AI security, adversarial machine learning, provenance, and deployable safety.
Latest Lab News

ACM SIGSAC Featured My Work on LinkedIn

I was featured by the ACM Special Interest Group on Security, Audit, and Control (SIGSAC) on LinkedIn.

Birhanu Eshete

DeepLeak Accepted at IEEE SaTML

Our latest paper "DeepLeak: Privacy Enhancing Hardening of Model Explanations Against Membership Leakage" has been accepted to the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML).

Birhanu Eshete

Keynote at IEEE SaTC 2026

I will be giving a keynote at the IEEE Conference on Secure and Trustworthy CyberInfrastructure for IoT and Microelectronics (SaTC 2026) in Houston, TX.

Birhanu Eshete

Research Areas

Trustworthy AI Across the Lifecycle

Our work spans AI security, privacy, safety, explainability, and ethical norms.


AI Security

Hardening AI against backdoors, data poisoning, evasion, and model-stealing threats.

Backdoor Detection · Poisoning Defense · Adversarial Robustness

AI Privacy

Protecting training data and model internals from inference attacks and sensitive leakage.

Membership Inference · Leakage Measurement · Privacy-Preserving ML

AI Safety

Ensuring models behave safely and predictably in high-stakes and adversarial settings.

Unsafe Behavior Detection · Assurance Mechanisms · Reliable Deployment

Explainability and Accountability

Building interpretable and auditable AI where decisions can be traced and validated.

Explanation Robustness · Decision Traceability · Forensic Analysis

Ethical and Responsible AI

Embedding responsible deployment norms across the AI lifecycle in critical systems.

Responsible AI Practice · Ethical Norms · Accountability-by-Design
Provenance-Centric AI Security and Safety
DSPLab research advances a new paradigm for securing and validating artificial intelligence systems: Provenance-Centric AI Security and Safety. As AI systems increasingly influence high-stakes domains such as cybersecurity, finance, healthcare, and autonomous systems, the core challenge is no longer only improving accuracy but ensuring that AI systems remain robust, safe, trustworthy, and accountable throughout their lifecycle. We argue that the key to achieving this lies in provenance: understanding and tracing the origins, lineage of transformations, and influence pathways that shape a model's behavior.

Traditional AI evaluation relies largely on black-box testing: observing outputs without visibility into the internal processes that produced them. This leaves critical blind spots against threats such as data poisoning, backdoor attacks, adversarial manipulation, and unsafe or unintended model behaviors. Our work introduces fine-grained observability into the AI pipeline by tracking the lifecycle history of data, training dynamics, parameter updates, and inference-time information flows. Through this provenance-centric lens, we develop the theoretical foundations, algorithms, and systems that make robustness and safety measurable, explainable, and auditable, enabling capabilities such as attack detection, forensic analysis, accountability, and automated model repair.

Our vision is to establish provenance as a foundational layer for AI security and AI safety, transforming AI from opaque systems into observable and auditable infrastructures where model decisions can be traced, inspected, and verified, enabling responsible deployment of advanced AI in critical societal systems.
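To make the idea concrete, here is a minimal illustrative sketch (not DSPLab's actual tooling): a hypothetical `ProvenanceRecord` node stores which operation and parent artifacts produced each artifact in a pipeline, and a lineage walk recovers every artifact that influenced a trained checkpoint — the kind of backward trace that provenance-centric forensics and repair build on.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProvenanceRecord:
    """One node in a provenance graph: an artifact, the operation that
    produced it, and the parent artifacts that operation consumed."""
    artifact_id: str
    operation: str                       # e.g. "ingest", "augment", "train-step"
    parents: List[str] = field(default_factory=list)

def trace_lineage(records: Dict[str, ProvenanceRecord], artifact_id: str) -> List[str]:
    """Walk parent links back to the roots, returning every artifact
    that influenced `artifact_id` (depth-first, deduplicated)."""
    seen: List[str] = []
    stack = [artifact_id]
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.append(current)
        record = records.get(current)
        if record:
            stack.extend(record.parents)
    return seen

# Toy pipeline: raw data -> augmented batch -> model checkpoint
records = {
    "raw":   ProvenanceRecord("raw", "ingest"),
    "batch": ProvenanceRecord("batch", "augment", parents=["raw"]),
    "ckpt":  ProvenanceRecord("ckpt", "train-step", parents=["batch"]),
}
print(trace_lineage(records, "ckpt"))  # → ['ckpt', 'batch', 'raw']
```

Real provenance capture would attach far richer metadata (hashes, timestamps, training hyperparameters, per-sample gradient influence), but even this skeleton shows why lineage makes questions like "which data shaped this checkpoint?" answerable rather than opaque.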
Flagship Projects

Open-source systems and reproducible artifacts from recent lab work.

DeepLeak

Privacy hardening for explanation methods against membership inference leakage.

Firas Ben Hmida

DeepProv

Inference provenance graph analysis for behavioral diagnosis and targeted DNN repair.

Firas Ben Hmida

PoisonSpot

Fine-grained training provenance tracking to detect clean-label backdoor poisoning.

Philemon Hailemariam
Featured Publications

Recent papers from DSPLab at flagship security and trustworthy AI venues.

DeepLeak: Privacy Enhancing Hardening of Model Explanations Against Membership Leakage

Machine learning (ML) explainability is central to algorithmic transparency in high-stakes settings such as predictive diagnostics and loan approval. Yet these same domains demand …

Firas Ben Hmida

DeepProv: Behavioral Characterization and Repair of Neural Networks via Inference Provenance Graph Analysis

Deep neural networks (DNNs) are increasingly being deployed in high-stakes applications, from self-driving cars to biometric authentication. However, their unpredictable and …

Firas Ben Hmida

PoisonSpot: Precise Spotting of Clean-Label Backdoors via Fine-Grained Training Provenance Tracking

Relying on untrusted data exposes machine learning models to backdoor attacks, where adversaries poison training data to embed hidden behaviors. Existing defenses struggle against …

Philemon Hailemariam
Meet the Team

Researchers Building Trustworthy AI

We are a collaborative group of researchers working across AI security, privacy, and dependable deployment.

Principal Investigator

Birhanu Eshete

Principal Investigator

University of Michigan-Dearborn

Associate Professor of Computer Science at the University of Michigan-Dearborn and Director of the Data-Driven Security & Privacy Lab (DSPLab).

AI Trustworthiness · AI Security · Privacy-Preserving ML

PhD Researchers

Firas Ben Hmida

PhD Candidate

University of Michigan-Dearborn

Philemon Hailemariam

PhD Candidate

University of Michigan-Dearborn

MS Researchers

Alistair Clarke

Master's Student

2025-present.

Kashif Ali Khan

Master's Student

University of Michigan-Dearborn

Alumni

Abe Amich

R&D ML Engineer for Cybersecurity, Sandbox AQ

Ph.D., 2019-2024, University of Michigan-Dearborn.

Elie Rizk

AI Engineer, Siren Analytics

M.Sc., 2023-2024, University of Michigan-Dearborn.

Poornaditya Mishra

AI Alchemist, Miracle Labs

M.Sc., 2024, University of Michigan-Dearborn.

Zain Sbeih

Co-Founder, Rewixx

B.Sc., 2024-2025, University of Michigan-Dearborn.

Christine Carlton

Network Monitoring and Observability Manager, Ford Motor Company

M.Sc., 2023, University of Michigan-Dearborn.

Ata Kaboudi

Software Engineer, CBRE Investment Management

M.Sc., 2023, University of Michigan-Dearborn.

Jon-Nicklaus Jackson

IT Security and Compliance Analyst, Bosch USA

M.Sc., 2023, University of Michigan-Dearborn.

Hassaan Ali

Senior Software Engineer, Tesla

M.Sc., 2023, University of Michigan-Dearborn.

Ismat Jarin

Ph.D. Student, UC Irvine

Ph.D., 2019-2022 (DNF), University of Michigan-Dearborn.

Chevy Pawlik

IT Security Analyst, Auto-Owners Insurance

B.Sc., 2022, University of Michigan-Dearborn.

Olajide David

HPC Engineer, Gilead Sciences

M.Sc., 2022, University of Michigan-Dearborn.

Hassan Ali

Senior Software Engineer, General Motors

M.Sc., 2022, University of Michigan-Dearborn.

Majed Chamseddine

Security Engineer, Amazon

M.Sc., 2021, University of Michigan-Dearborn.

Abdullah Ali

Software Engineer

M.Sc., 2019, University of Michigan-Dearborn.

Youssef Aydi

M.S. Student, University of Michigan-Dearborn

B.Sc., 2024, University of Michigan-Dearborn.

Zeineb Moalla

M.S. Student, University of Michigan-Dearborn

B.Sc., 2022, University of Michigan-Dearborn.

Join DSPLab

Open positions are not currently available. We welcome inquiries from students interested in future opportunities in trustworthy AI, AI security, and privacy.

Start the Conversation

We are always open to discussing new projects, opportunities, or just having a chat.

Visit Us

Data-Driven Security & Privacy Lab (DSPLab)

Department of Computer and Information Science

University of Michigan-Dearborn

Dearborn, MI, USA

Office Hours

By appointment

View on Map

Connect


Prospective Students

Open positions are not currently available. You can still share your CV and interests if you would like to be considered for future opportunities.

Email Prof. Eshete