We study robustness (to training data poisoning, model evasion, and model stealing), privacy (against membership inference on training examples), and the interactions among robustness, privacy, transparency, and fairness properties in machine learning.
Our focus is on systematic curation, characterization, measurement, and forensics of cyber threat intelligence (e.g., malware samples, infection traces, natural language threat descriptions).
We focus on the analysis, reconstruction, measurement, and defense against cybercrime, covering both cybercriminal activities (e.g., phishing, malware distribution) and cybercrime toolkits (e.g., exploit kits, ransomware, and APTs).