Workshop
PAIR^2Struct: Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data
Hao Wang · Wanyu Lin · Hao He · Di Wang · Chengzhi Mao · Muhan Zhang
Fri 29 Apr, 9 a.m. PDT
In recent years, principles and guidelines for the accountable and ethical use of artificial intelligence (AI) have sprung up around the globe. In particular, data Privacy, Accountability, Interpretability, Robustness, and Reasoning have been broadly recognized as fundamental principles for applying machine learning (ML) technologies to decision-critical and/or privacy-sensitive applications. At the same time, in a vast range of real-world applications, data is naturally represented in structured formalisms, such as graph-structured data (e.g., networks), grid-structured data (e.g., images), and sequential data (e.g., text). By exploiting this inherent structure, one can design approaches that identify and use the most relevant variables to make reliable decisions, thereby facilitating real-world deployment.

In this workshop, we will examine research progress toward the accountable and ethical use of AI from diverse research communities, including the ML community, the security & privacy community, and beyond. Specifically, we will focus on the limitations of existing notions of Privacy, Accountability, Interpretability, Robustness, and Reasoning. We aim to bring together researchers from various areas (e.g., ML, security & privacy, computer vision, and healthcare) to discuss the challenges, definitions, formalisms, and evaluation protocols surrounding the accountable and ethical use of ML in high-stakes applications with structured data. In particular, we will discuss the interplay among these fundamental principles, from theory to applications, and identify new areas that call for additional research effort. We will also seek possible solutions and interpretations grounded in the notion of causation, an inherent property of systems. We hope the workshop proves fruitful for building accountable and ethical AI systems in practice.
Schedule
Fri 9:00 a.m. - 9:05 a.m. | Introduction and Opening Remarks | Hao Wang · Wanyu Lin
Fri 9:05 a.m. - 9:30 a.m. | On the Foundations of Causal Artificial Intelligence (Invited Talk) | Elias Bareinboim
Fri 9:30 a.m. - 9:35 a.m. | Q&A with Elias Bareinboim
Fri 9:35 a.m. - 10:05 a.m. | Privacy Meter Project: Towards Auditing Data Privacy (Invited Talk, with Q&A) | Reza Shokri
Fri 10:05 a.m. - 10:15 a.m. | Rethinking Stability for Attribution-based Explanations (Oral) | Chirag Agarwal · Nari Johnson · Martin Pawelczyk · Satyapriya Krishna · Eshika Saxena · Marinka Zitnik · Hima Lakkaraju
Fri 10:15 a.m. - 10:40 a.m. | Trustworthy Machine Learning via Logic Reasoning (Invited Talk) | Bo Li
Fri 10:40 a.m. - 10:45 a.m. | Q&A with Bo Li
Fri 10:45 a.m. - 11:10 a.m. | Quantifying Privacy Risks of Machine Learning Models (Invited Talk) | Yang Zhang
Fri 11:10 a.m. - 11:15 a.m. | Q&A with Yang Zhang
Fri 11:15 a.m. - 11:25 a.m. | Invariant Causal Representation Learning for Generalization in Imitation and Reinforcement Learning (Oral) | Chaochao Lu · José Miguel Hernández Lobato · Bernhard Schoelkopf
Fri 11:25 a.m. - 1:30 p.m. | Poster Session 1
Fri 1:30 p.m. - 1:55 p.m. | Interpretable AI for Medical Imaging (Invited Talk) | Lei Xing
Fri 1:55 p.m. - 2:00 p.m. | Q&A with Lei Xing
Fri 2:00 p.m. - 2:25 p.m. | Learning Structured Dynamics Models for Physical Reasoning and Robot Manipulation (Invited Talk) | Jiajun Wu
Fri 2:25 p.m. - 2:30 p.m. | Q&A with Jiajun Wu
Fri 2:30 p.m. - 2:40 p.m. | Maximizing Entropy on Adversarial Examples Can Improve Generalization (Oral) | Amrith Setlur · Benjamin Eysenbach
Fri 2:40 p.m. - 3:05 p.m. | Adapting Deep Predictors Under Causally Structured Shifts (Invited Talk) | Zachary Lipton
Fri 3:05 p.m. - 3:10 p.m. | Q&A with Zachary Lipton
Fri 3:10 p.m. - 3:35 p.m. | Explainable AI in Practice: Challenges and Opportunities (Invited Talk) | Himabindu Lakkaraju
Fri 3:35 p.m. - 3:40 p.m. | Q&A with Himabindu Lakkaraju
Fri 3:40 p.m. - 3:50 p.m. | Node-Level Differentially Private Graph Neural Networks (Oral) | Ameya Daigavane · Gagan Madan · Aditya Sinha · Abhradeep Guha Thakurta · Gaurav Aggarwal · Prateek Jain
Fri 3:50 p.m. - 4:40 p.m. | Panel
Fri 4:40 p.m. - 6:00 p.m. | Poster Session 2
- Reversing Adversarial Attacks with Multiple Self-Supervised Tasks (Poster) | Matthew Lawhon · Chengzhi Mao · Gustave Ducrest · Junfeng Yang
- Global Counterfactual Explanations: Investigations, Implementations and Improvements (Poster) | Dan Ley · Saumitra Mishra · Daniele Magazzeni
- Saliency Maps Contain Network "Fingerprints" (Poster) | Amy Widdicombe · Been Kim · Simon Julier
- Geometrically Guided Saliency Maps (Poster) | Md Mahfuzur Rahman · Noah Lewis · Sergey Plis
- ConceptDistil: Model-Agnostic Distillation of Concept Explanations (Poster) | João Pedro Sousa · Ricardo Moreira · Vladimir Balayan · Pedro Saleiro · Pedro Bizarro
- Data Poisoning Attacks on Off-Policy Policy Evaluation Algorithms (Poster) | Elita Lobo · Harvineet Singh · Marek Petrik · Cynthia Rudin · Hima Lakkaraju
- Efficient Privacy-Preserving Inference for Convolutional Neural Networks (Poster) | Han Xuanyuan · Francisco Vargas · Stephen Cummins
- Post-hoc Concept Bottleneck Models (Poster) | Mert Yuksekgonul · Maggie Wang · James Y Zou
- CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks (Poster) | Tuomas Oikarinen · Tsui-Wei Weng
- Robust Randomized Smoothing via Two Cost-Effective Approaches (Poster) | Linbo Liu · Trong Hoang · Lam Nguyen · Tsui-Wei Weng
- Graphical Clusterability and Local Specialization in Deep Neural Networks (Poster) | Stephen Casper · Shlomi Hod · Daniel Filan · Cody Wild · Andrew Critch · Stuart Russell
- Sparse Logits Suffice to Fail Knowledge Distillation (Poster) | Haoyu Ma · Yifan Huang · Hao Tang · Chenyu You · Deying Kong · Xiaohui Xie
- User-Level Membership Inference Attack against Metric Embedding Learning (Poster) | Guoyao Li · Shahbaz Rezaei · Xin Liu
- Towards Differentially Private Query Release for Hierarchical Data (Poster) | Terrance Liu · Steven Wu
- Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity (Poster) | Shiyun Xu · Zhiqi Bu · Pratik A Chaudhari · Ian Barnett
- Neural Logic Analogy Learning (Poster) | Yujia Fan · Yongfeng Zhang
- Rethinking Stability for Attribution-based Explanations (Poster) | Chirag Agarwal · Nari Johnson · Martin Pawelczyk · Satyapriya Krishna · Eshika Saxena · Marinka Zitnik · Hima Lakkaraju
- Maximizing Entropy on Adversarial Examples Can Improve Generalization (Poster) | Amrith Setlur · Benjamin Eysenbach
- Node-Level Differentially Private Graph Neural Networks (Poster) | Ameya Daigavane · Gagan Madan · Aditya Sinha · Abhradeep Guha Thakurta · Gaurav Aggarwal · Prateek Jain
- Invariant Causal Representation Learning for Generalization in Imitation and Reinforcement Learning (Poster) | Chaochao Lu · José Miguel Hernández Lobato · Bernhard Schoelkopf