PAIR2Struct: Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data
Workshop at ICLR 2022

Overview

In recent years, principles and guidelines for the accountable and ethical use of artificial intelligence (AI) have emerged around the globe. In particular, Data Privacy, Accountability, Interpretability, Robustness, and Reasoning have been broadly recognized as fundamental principles for applying machine learning (ML) technologies to decision-critical and/or privacy-sensitive applications. At the same time, in many real-world applications, data is naturally represented in structured formalisms, such as graph-structured data (e.g., networks), grid-structured data (e.g., images), and sequential data (e.g., text). By exploiting this inherent structure, one can design approaches that identify and use the most relevant variables for reliable decisions, thereby facilitating real-world deployment.

In this workshop, we will examine research progress toward the accountable and ethical use of AI across diverse research communities, including the ML community, the security & privacy community, and beyond. In particular, we will focus on the limitations of existing notions of Privacy, Accountability, Interpretability, Robustness, and Reasoning. We aim to bring together researchers from various areas (e.g., ML, security & privacy, computer vision, and healthcare) to discuss the challenges, definitions, formalisms, and evaluation protocols surrounding the accountable and ethical use of ML technologies in high-stakes applications with structured data. We will discuss the interplay among these fundamental principles, from theory to applications, and identify new areas that call for additional research effort. We will also seek possible solutions, and their interpretations, through the notion of causation, an inherent property of the systems being modeled. We hope the workshop will help advance the accountable and ethical use of AI systems in practice.

Call For Papers

All submissions are due by March 5, 2022 (previously February 26) at 11:59 PM UTC.

Topics include but are not limited to:

• Privacy-preserving machine learning methods on structured data (e.g., graphs, manifolds, images, and text).
• Theoretical foundations for privacy-preserving and/or explainability of deep learning on structured data (e.g., graphs, manifolds, images, and text).
• Interpretability and accountability in different application domains, including healthcare, bioinformatics, finance, and physics.
• Improving interpretability and accountability of black-box deep learning with graphical abstraction (e.g., causal graphs, graphical models, computational graphs).
• Robust machine learning methods via graphical abstraction (e.g., causal graphs, graphical models, computational graphs).
• Relational/graph learning under robustness constraints (robustness in the face of adversarial attacks, distribution shift, environment changes, etc.).

Paper submissions: Please format your paper using the main conference LaTeX style files. The workshop has a strict page limit of 5 pages for the main text and 3 pages for the supplemental text. References may use unlimited additional pages.
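For orientation, a minimal LaTeX skeleton using the main conference style files might look like the sketch below. The file names (iclr2022_conference.sty, iclr2022_conference.bst) are those distributed with the ICLR 2022 author kit, and the bibliography file name references.bib is a placeholder; check the submission page for any workshop-specific template.

    \documentclass{article}
    % Style files from the ICLR 2022 author kit (iclr2022_conference.sty,
    % iclr2022_conference.bst) must be in the compile directory.
    \usepackage{iclr2022_conference,times}

    \title{Your Paper Title}
    \author{Anonymous Authors}  % keep anonymous if double-blind review applies

    \begin{document}
    \maketitle

    \begin{abstract}
    Abstract text.
    \end{abstract}

    \section{Introduction}
    Main text: at most 5 pages.

    \bibliographystyle{iclr2022_conference}
    \bibliography{references}  % placeholder: references.bib

    \appendix
    \section{Supplemental Material}
    At most 3 additional pages.

    \end{document}

Compiling with pdflatex and bibtex in the usual order should suffice; the ICLR styles also provide an \iclrfinalcopy switch for de-anonymizing the camera-ready version.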

Submission page: ICLR 2022 PAIR2Struct Workshop.

Please note that ICLR policy states: "Workshops are not a venue for work that has been previously published in other conferences on machine learning. Work that is presented at the main ICLR conference should not appear in a workshop."

Submission deadline: March 5 (previously February 26), 2022 at 11:59 PM UTC (i.e., 6:59 PM EST and 3:59 PM PST)

Author notifications: March 26, 2022

Workshop: April 29, 2022

Schedule

Date: April 29, 2022            Location: Online

PDT            EDT            CEST           BJT            Event
09:00-09:05    12:00-12:05    18:00-18:05    00:00-00:05    Introduction and Opening Remarks
09:05-09:35    12:05-12:35    18:05-18:35    00:05-00:35    Invited Talk 1
09:35-10:05    12:35-13:05    18:35-19:05    00:35-01:05    Invited Talk 2
10:05-10:15    13:05-13:15    19:05-19:15    01:05-01:15    Contributed Talk 1
10:15-10:45    13:15-13:45    19:15-19:45    01:15-01:45    Invited Talk 3
10:45-11:15    13:45-14:15    19:45-20:15    01:45-02:15    Invited Talk 4
11:15-11:25    14:15-14:25    20:15-20:25    02:15-02:25    Contributed Talk 2
11:25-13:30    14:25-16:30    20:25-22:30    02:25-04:30    Poster Session 1 & Break
13:30-14:00    16:30-17:00    22:30-23:00    04:30-05:00    Invited Talk 5
14:00-14:30    17:00-17:30    23:00-23:30    05:00-05:30    Invited Talk 6
14:30-14:40    17:30-17:40    23:30-23:40    05:30-05:40    Contributed Talk 3
14:40-15:10    17:40-18:10    23:40-00:10    05:40-06:10    Invited Talk 7
15:10-15:40    18:10-18:40    00:10-00:40    06:10-06:40    Invited Talk 8
15:40-15:50    18:40-18:50    00:40-00:50    06:40-06:50    Contributed Talk 4
15:50-16:40    18:50-19:40    00:50-01:40    06:50-07:40    Panel Discussion
16:40-18:00    19:40-21:00    01:40-03:00    07:40-09:00    Poster Session 2

Invited Speakers

Bo Li
University of Illinois at Urbana-Champaign

Hima Lakkaraju
Harvard University

Reza Shokri
National University of Singapore

Elias Bareinboim
Columbia University

Yang Zhang
CISPA Helmholtz Center for Information Security

Jiajun Wu
Stanford University

Lei Xing
Stanford University

Zachary Chase Lipton
Carnegie Mellon University

Organizers

Wanyu Lin
Hong Kong Polytechnic University

Hao Wang
Rutgers University

Hao He
Massachusetts Institute of Technology

Di Wang
King Abdullah University of Science and Technology

Chengzhi Mao
Columbia University

Muhan Zhang
Peking University