DeepLeak: Privacy Enhancing Hardening of Model Explanations Against Membership Leakage

Machine learning (ML) explainability is central to algorithmic transparency in high-stakes settings such as predictive diagnostics and loan approval. Yet these same domains demand …

Firas Ben Hmida
DeepProv: Behavioral Characterization and Repair of Neural Networks via Inference Provenance Graph Analysis

Deep neural networks (DNNs) are increasingly being deployed in high-stakes applications, from self-driving cars to biometric authentication. However, their unpredictable and …

Firas Ben Hmida
PoisonSpot: Precise Spotting of Clean-Label Backdoors via Fine-Grained Training Provenance Tracking

Relying on untrusted data exposes machine learning models to backdoor attacks, where adversaries poison training data to embed hidden behaviors. Existing defenses struggle against …

Philemon Hailemariam
MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members

In membership inference attacks (MIAs), an adversary observes the predictions of a model to determine whether a sample is part of the model's training data. Existing MIA defenses …

Ismat Jarin
Designing Secure Performance Metrics for Last-Level Cache

In modern CPU architectures, last-level caches (LLCs) are typically shared among multiple CPU cores. LLCs enable data sharing across application threads and promote data …

Probir Roy
DeResistor: Toward Detection-Resistant Probing for Evasion of Internet Censorship

The arms race between Internet freedom advocates and censors has catalyzed the emergence of sophisticated blocking techniques and directed significant research emphasis toward the …

Abe Amich

EG-Booster: Explanation-Guided Booster of ML Evasion Attacks

The widespread usage of machine learning (ML) in a myriad of domains has raised questions about its trustworthiness in security-critical environments. Part of the quest for …

Abe Amich
DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning

Differential Privacy (DP) has emerged as a rigorous formalism to reason about quantifiable privacy leakage. In machine learning (ML), DP has been employed to limit …

Ismat Jarin
Adversarial Detection of Censorship Measurements

The arms race between Internet freedom technologists and censoring regimes has catalyzed the deployment of more sophisticated censoring techniques and directed significant research …

Abe Amich
PRICURE: Privacy-Preserving Collaborative Inference in a Multi-Party Setting

When multiple parties that deal with private data aim for a collaborative prediction task such as medical image classification, they are often constrained by data protection …

Ismat Jarin