Workshop
Security and Safety in Machine Learning Systems
Xinyun Chen · Cihang Xie · Ali Shafahi · Bo Li · Ding Zhao · Tom Goldstein · Dawn Song
Fri 7 May, 8:45 a.m. PDT
While machine learning (ML) models have achieved great success in many applications, concerns have been raised about their potential vulnerabilities and risks when they are deployed in safety-critical applications. On the one hand, from the security perspective, researchers have explored worst-case attacks against ML models, which in turn have inspired both empirical and certifiable defenses. On the other hand, from the safety perspective, researchers have studied safety constraints that a safe AI system should satisfy (e.g., an autonomous vehicle should not hit pedestrians). This workshop makes a first attempt at bridging the gap between these two communities and aims to discuss principles for developing secure and safe ML systems. The workshop also focuses on how future practitioners should prepare themselves to reduce the risks of unintended behaviors of sophisticated ML models.
The workshop will bring together experts from the machine learning, computer security, and AI safety communities. We aim to highlight recent related work from these communities, clarify the foundations of secure and safe ML, and chart out important directions for future work and cross-community collaboration.
Schedule
Fri 8:45 a.m. - 9:00 a.m. | Opening Remarks (Talk) | Xinyun Chen
Fri 9:00 a.m. - 9:01 a.m. | Speaker Introduction: Alina Oprea (Intro)
Fri 9:01 a.m. - 9:30 a.m. | Invited Talk #1: Alina Oprea (Talk) | Alina Oprea
Fri 9:30 a.m. - 9:35 a.m. | Live QA: Alina Oprea (QA)
Fri 9:35 a.m. - 9:36 a.m. | Contributed Talk #1 Introduction (Intro)
Fri 9:36 a.m. - 9:45 a.m. | Contributed Talk #1: Ditto: Fair and Robust Federated Learning Through Personalization (Talk) | Tian Li · Ahmad Beirami · Virginia Smith
Fri 9:45 a.m. - 10:20 a.m. | Invited Talk #2: David Wagner (Talk) | David Wagner
Fri 10:20 a.m. - 10:55 a.m. | Invited Talk #3: Zico Kolter (Talk) | Zico Kolter
Fri 10:55 a.m. - 10:56 a.m. | Speaker Introduction: Alan Yuille (Intro)
Fri 10:56 a.m. - 11:30 a.m. | Invited Talk #4: Alan Yuille (Talk) | Alan Yuille
Fri 11:30 a.m. - 12:00 p.m. | Panel Discussion #1 (Panel) | Alina Oprea · David Wagner · Adam Kortylewski · Christopher Re · Tom Goldstein
Fri 12:00 p.m. - 1:00 p.m. | Poster Session #1 (Poster Session)
Fri 1:00 p.m. - 1:20 p.m. | Lunch Break
Fri 1:20 p.m. - 1:21 p.m. | Speaker Introduction: Raquel Urtasun (Intro)
Fri 1:21 p.m. - 2:00 p.m. | Invited Talk #5: Raquel Urtasun (Talk) | Raquel Urtasun
Fri 2:00 p.m. - 2:35 p.m. | Invited Talk #6: Ben Zhao (Talk) | Ben Zhao
Fri 2:35 p.m. - 2:36 p.m. | Speaker Introduction: Aleksander Madry (Intro)
Fri 2:36 p.m. - 3:10 p.m. | Invited Talk #7: Aleksander Madry (Talk) | Aleksander Madry
Fri 3:10 p.m. - 3:11 p.m. | Contributed Talk #2 Introduction (Intro)
Fri 3:11 p.m. - 3:20 p.m. | Contributed Talk #2: RobustBench: a standardized adversarial robustness benchmark (Talk) | Francesco Croce · Vikash Sehwag · Prateek Mittal · Matthias Hein
Fri 3:20 p.m. - 3:21 p.m. | Speaker Introduction: Christopher Re (Intro)
Fri 3:21 p.m. - 3:55 p.m. | Invited Talk #8: Christopher Re (Talk) | Christopher Re
Fri 3:55 p.m. - 3:56 p.m. | Speaker Introduction: Aditi Raghunathan (Intro)
Fri 3:56 p.m. - 4:25 p.m. | Invited Talk #9: Aditi Raghunathan (Talk) | Aditi Raghunathan
Fri 4:25 p.m. - 4:30 p.m. | Live QA: Aditi Raghunathan (QA)
Fri 4:30 p.m. - 5:00 p.m. | Panel Discussion #2 (Panel) | Ben Zhao · Aleksander Madry · Aditi Raghunathan · Catherine Olsson
Fri 5:00 p.m. - 6:00 p.m. | Poster Session #2 (Poster Session)
- Hidden Backdoor Attack against Semantic Segmentation Models (Paper) | Yiming Li
- PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches (Paper) | Chong Xiang
- FIRM: Detecting Adversarial Audios by Recursive Filters with Randomization (Paper) | Guanhong Tao
- Simple Transparent Adversarial Examples (Paper) | Jaydeep Borkar
- Reliably fast adversarial training via latent adversarial perturbation (Paper) | Sang Wan Lee
- Safe Exploration Method for Reinforcement Learning under Existence of Disturbance (Paper) | Yoshihiro Okawa
- Accelerated Policy Evaluation with Adaptive Importance Sampling (Paper) | Mengdi Xu
- Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers (Paper) | Francesco Croce
- RobustBench: a standardized adversarial robustness benchmark (Paper) | Francesco Croce
- Ditto: Fair and Robust Federated Learning Through Personalization (Paper) | Tian Li
- Measuring Adversarial Robustness using a Voronoi-Epsilon Adversary (Paper) | Hyeongji Kim
- Low Curvature Activations Reduce Overfitting in Adversarial Training (Paper) | Vasu Singla
- Extracting Hyperparameter Constraints From Code (Paper) | Ingkarat Rak-amnouykit
- Sparse Coding Frontend for Robust Neural Networks (Paper) | Can Bakiskan
- What is Wrong with One-Class Anomaly Detection? (Paper) | JuneKyu Park
- Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting (Paper) | Xiangyu Qi
- Incorporating Label Uncertainty in Intrinsic Robustness Measures (Paper) | Xiao Zhang
- Bridging the Gap Between Adversarial Robustness and Optimization Bias (Paper) | Fartash Faghri
- High-Robustness, Low-Transferability Fingerprinting of Neural Networks (Paper) | Siyue Wang
- Covariate Shift Adaptation for Adversarially Robust Classifier (Paper) | Sudipan Saha
- Coordinated Attacks Against Federated Learning: A Multi-Agent Reinforcement Learning Approach (Paper) | Wen Shen
- Deep Gradient Attack with Strong DP-SGD Lower Bound for Label Privacy (Paper) | Sen Yuan
- Byzantine-Robust and Privacy-Preserving Framework for FedML (Paper) | Seyedeh Hanieh Hashemi
- Shift Invariance Can Reduce Adversarial Robustness (Paper) | Songwei Ge
- Doing More with Less: Improving Robustness using Generated Data (Paper) | Sven Gowal
- Data Augmentation Can Improve Robustness (Paper) | Sylvestre-Alvise Rebuffi
- Speeding Up Neural Network Verification via Automated Algorithm Configuration (Paper) | Matthias König
- Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers (Paper) | Clayton C Ashcraft
- Mitigating Adversarial Training Instability with Batch Normalization (Paper) | Arvind Sridhar
- DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations (Paper) | Eitan Borgnia
- Provable defense by denoised smoothing with learned score function (Paper) | Kyungmin Lee
- Detecting Adversarial Attacks through Neural Activations (Paper) | Graham Annett
- Efficient Disruptions of Black-box Image Translation Deepfake Generation Systems (Paper) | Nataniel Ruiz · Sarah A Bargal · Stanley Sclaroff
- Poisoned classifiers are not only backdoored, they are fundamentally broken (Paper) | Mingjie Sun · Siddhant Agarwal · Zico Kolter
- Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness (Paper) | Linxi Jiang · James Bailey
- Safe Model-based Reinforcement Learning with Robust Cross-Entropy Method (Paper) | Zuxin Liu
- GateNet: Bridging the gap between Binarized Neural Network and FHE evaluation (Paper) | Cheng Fu
- Non-Singular Adversarial Robustness of Neural Networks (Paper) | Chia-Yi Hsu · Pin-Yu Chen
- Adversarial Examples Make Stronger Poisons (Paper) | Liam H Fowl · Micah Goldblum · Ping-yeh Chiang · Jonas Geiping · Tom Goldstein
- What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors (Paper) | Jonas Geiping · Liam H Fowl · Micah Goldblum · Michael Moeller · Tom Goldstein
- Baseline Pruning-Based Approach to Trojan Detection in Neural Networks (Paper) | Peter Bajcsy
- Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters (Paper) | Javier Carnerero-Cano
- Examining Trends in Out-of-Domain Confidence (Paper) | Richard Liaw
- $\delta$-CLUE: Diverse Sets of Explanations for Uncertainty Estimates (Paper) | Dan Ley · Umang Bhatt · Adrian Weller
- Boosting black-box adversarial attack via exploiting loss smoothness (Paper) | Hoang Tran
- On Improving Adversarial Robustness Using Proxy Distributions (Paper) | Vikash Sehwag · Chong Xiang · Mung Chiang · Prateek Mittal
- Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release (Paper) | Liam H Fowl · Ping-yeh Chiang · Micah Goldblum · Jonas Geiping · Tom Goldstein
- Robustness from Perception (Paper) | Saeed Mahloujifar · Chong Xiang · Vikash Sehwag · Prateek Mittal
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks (Paper) | Dequan Wang · David Wagner · Trevor Darrell
- Moral Scenarios for Reinforcement Learning Agents (Paper) | Dan Hendrycks · Mantas Mazeika · Andy Zou · Bo Li · Dawn Song