ACM SIGSAC Featured My Work on LinkedIn
I was featured by the ACM Special Interest Group on Security, Audit, and Control (SIGSAC) on LinkedIn.
Our latest paper, "DeepLeak: Privacy Enhancing Hardening of Model Explanations Against Membership Leakage," has been accepted to the IEEE Conference on Secure and Trustworthy Machine …
I will be giving a keynote at the IEEE Conference on Secure and Trustworthy CyberInfrastructure for IoT and Microelectronics (SaTC 2026) in Houston, TX.
Trustworthy AI Across the Lifecycle
Our work spans AI security, privacy, safety, explainability, and ethical norms.
Hardening AI against backdoors, data poisoning, evasion, and model-stealing threats.
Protecting training data and model internals from inference attacks and sensitive leakage.
Ensuring models behave safely and predictably in high-stakes and adversarial settings.
Building interpretable and auditable AI where decisions can be traced and validated.
Embedding responsible deployment norms across the AI lifecycle in critical systems.
Open-source systems and reproducible artifacts from recent lab work.
Privacy hardening for explanation methods against membership inference leakage.
Inference provenance graph analysis for behavioral diagnosis and targeted DNN repair.
Fine-grained training provenance tracking to detect clean-label backdoor poisoning.
Recent papers from DSPLab at flagship security and trustworthy AI venues.
Machine learning (ML) explainability is central to algorithmic transparency in high-stakes settings such as predictive diagnostics and loan approval. Yet these same domains demand …
Deep neural networks (DNNs) are increasingly deployed in high-stakes applications, from self-driving cars to biometric authentication. However, their unpredictable and …
Relying on untrusted data exposes machine learning models to backdoor attacks, where adversaries poison training data to embed hidden behaviors. Existing defenses struggle against …
Researchers Building Trustworthy AI
Principal Investigator
University of Michigan-Dearborn
Associate Professor of Computer Science at the University of Michigan-Dearborn and Director of the Data-Driven Security & Privacy Lab (DSPLab).
R&D ML Engineer for Cybersecurity, Sandbox AQ
Ph.D., 2019-2024, University of Michigan-Dearborn.
Network Monitoring and Observability Manager, Ford Motor Company
M.Sc., 2023, University of Michigan-Dearborn.
Software Engineer, CBRE Investment Management
M.Sc., 2023, University of Michigan-Dearborn.
IT Security and Compliance Analyst, Bosch USA
M.Sc., 2023, University of Michigan-Dearborn.
IT Security Analyst, Auto-Owners Insurance
B.Sc., 2022, University of Michigan-Dearborn.
M.S. Student, University of Michigan-Dearborn
B.Sc., 2024, University of Michigan-Dearborn.
M.S. Student, University of Michigan-Dearborn
B.Sc., 2022, University of Michigan-Dearborn.
We are always open to discussing new projects, opportunities, or just having a chat.
Data-Driven Security & Privacy Lab (DSPLab)
Department of Computer and Information Science
University of Michigan-Dearborn
Dearborn, MI, USA
By appointment
We do not currently have open positions. You are still welcome to share your CV and research interests to be considered for future opportunities.
Email Prof. Eshete