Areas Covered Across the AI Lifecycle
Defending AI systems against poisoning, backdoor, evasion, and model extraction attacks.
Preventing leakage of sensitive training information and mitigating inference-time privacy risks.
Building reliable systems that avoid unsafe or unintended model behavior in critical settings.
Making model behavior interpretable, traceable, and auditable.
Embedding ethical norms and accountability into lifecycle-wide AI governance.
Reproducible systems and open artifacts from active lab efforts.
DeepLeak: Privacy hardening for explanation methods against membership inference leakage (see the first sketch below).
Inference provenance graph analysis for behavioral diagnosis and targeted DNN repair (see the second sketch below).
Fine-grained training provenance tracking to detect clean-label backdoor poisoning (see the third sketch below).
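To make the DeepLeak threat model concrete, here is a minimal sketch of the kind of attack the project hardens against: a threshold-based membership inference that scores each point by the variance of its gradient (saliency) explanation. The toy model, data, and threshold rule are illustrative assumptions, not the DeepLeak implementation.

```python
# Sketch: membership inference from explanation variance.
# Everything here (model size, data, threshold) is a hypothetical stand-in.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier deliberately overfit to a small "member" set; overfitting
# amplifies the membership signal carried by the explanations.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
members = torch.randn(64, 20)
labels = torch.randint(0, 2, (64,))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):
    opt.zero_grad()
    loss_fn(model(members), labels).backward()
    opt.step()

def explanation_variance(x):
    # Variance of the input-gradient (saliency) explanation per sample.
    x = x.clone().requires_grad_(True)
    top_logit = model(x).max(dim=1).values.sum()
    (grad,) = torch.autograd.grad(top_logit, x)
    return grad.var(dim=1)

non_members = torch.randn(64, 20)
# Hypothetical threshold halfway between the two means; a real attacker
# would calibrate it with shadow models instead of ground truth.
tau = 0.5 * (explanation_variance(members).mean()
             + explanation_variance(non_members).mean())
# Training points tend to sit in flat, saturated regions, so their saliency
# maps tend to have LOW variance: guess "member" when variance < tau.
print("member recall:",
      (explanation_variance(members) < tau).float().mean().item())
print("non-member false positives:",
      (explanation_variance(non_members) < tau).float().mean().item())
```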
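The inference provenance project can be pictured with the second sketch: one graph node is recorded per executed module via PyTorch forward hooks, with edges in execution order. The node schema and the summary statistics are assumptions for illustration, not the project's actual trace format.

```python
# Sketch: building an inference provenance trace with forward hooks.
# The recorded statistics are illustrative choices, not a fixed schema.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

trace = []  # provenance nodes, appended in execution order

def record(name):
    def hook(module, inputs, output):
        trace.append({
            "node": name,
            "op": type(module).__name__,
            "out_mean": output.detach().mean().item(),
            "frac_active": (output.detach() > 0).float().mean().item(),
        })
    return hook

# Hook every leaf module so each executed op becomes one graph node.
handles = [
    module.register_forward_hook(record(name))
    for name, module in model.named_modules()
    if len(list(module.children())) == 0
]

model(torch.randn(1, 20))
for h in handles:
    h.remove()

# Edges follow execution order. Comparing the trace of a failing input
# against a passing one localizes the layers whose statistics diverge,
# which are the candidates for targeted repair.
edges = [(a["node"], b["node"]) for a, b in zip(trace, trace[1:])]
print(edges)  # [('0', '1'), ('1', '2')]
```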
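For the training provenance project, the third sketch logs each example's loss across epochs and flags trajectory outliers as poisoning suspects; the 2-sigma outlier rule and toy data are hypothetical stand-ins, not the project's detector.

```python
# Sketch: per-sample training provenance via loss trajectories.
# The 2-sigma flagging rule below is an illustrative assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)
x, y = torch.randn(128, 20), torch.randint(0, 2, (128,))
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss(reduction="none")  # keep per-sample losses

history = []  # one row per epoch: the per-sample provenance log
for _ in range(30):
    opt.zero_grad()
    per_sample = loss_fn(model(x), y)
    per_sample.mean().backward()
    opt.step()
    history.append(per_sample.detach())

traj = torch.stack(history)   # shape: (epochs, samples)
stat = traj.mean(dim=0)       # one trajectory statistic per sample
z = (stat - stat.mean()) / stat.std()
# Clean-label poisons often show anomalous loss dynamics relative to the
# rest of the data; flag trajectory outliers for manual inspection.
suspects = torch.nonzero(z.abs() > 2).flatten()
print("flagged sample indices:", suspects.tolist())
```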