Graph representation learning is a recurring task in applications such as computational chemistry, recommendation, reasoning, and learning for combinatorial optimization. Throughout these applications, understanding the generalization, invariances, and out-of-distribution robustness of graph neural networks is an important challenge.
First, we consider out-of-distribution generalization in widely used message passing graph neural networks (MPGNNs), aiming to understand the conditions under which such generalization is possible. Defining data shifts also requires an appropriate metric; we show that a pseudometric combining trees and optimal transport correlates well with the stability of MPGNNs.
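To make the MPGNN setting concrete, the following is a minimal sketch of one message-passing step in numpy. The graph, features, and weight matrices are hypothetical stand-ins, not taken from the talk; each node combines its own state with an aggregate of its neighbors' states.

```python
import numpy as np

# Toy undirected graph on 3 nodes with edges 0-1 and 1-2 (hypothetical example).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# Hypothetical 2-dimensional node features.
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Stand-ins for learned weight matrices, fixed here for illustration.
W_self = np.eye(2)
W_neigh = np.eye(2)

def mp_layer(A, H, W_self, W_neigh):
    """One message-passing step: each node updates by combining its own
    transformed state with the sum of its neighbors' transformed states,
    followed by a ReLU nonlinearity."""
    return np.maximum(0.0, H @ W_self + A @ H @ W_neigh)

H1 = mp_layer(A, H, W_self, W_neigh)
print(H1)
```

Stacking k such layers lets each node's representation depend on its k-hop neighborhood, which is the structure that notions of stability under data shift must account for.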
Second, many approaches to graph representation learning exploit spectral information. However, eigenvectors and eigenspaces demand specific model invariances in order to be processed consistently. We propose a new architecture that encodes these invariances, can be combined with MPGNNs, transformers, and other set architectures, and goes beyond existing models both theoretically and empirically.
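One such invariance is easy to illustrate: eigenvectors of a graph Laplacian are only defined up to sign, so a model consuming them should give the same output for v and -v. A minimal sketch of a sign-invariant encoding, with a hypothetical function phi standing in for a learned network (this is an assumed illustration, not the architecture from the talk):

```python
import numpy as np

def phi(v):
    # Hypothetical stand-in for a learned per-eigenvector network.
    return np.maximum(0.0, v + 0.3)

def sign_invariant_encode(V):
    """Encode eigenvectors as phi(V) + phi(-V), which is unchanged
    when any eigenvector column is replaced by its negation."""
    return phi(V) + phi(-V)

# Eigenvectors of a small path-graph Laplacian (signs are arbitrary).
L_path = np.array([[ 1., -1.,  0.],
                   [-1.,  2., -1.],
                   [ 0., -1.,  1.]])
w, V = np.linalg.eigh(L_path)

enc1 = sign_invariant_encode(V)
enc2 = sign_invariant_encode(-V)  # flip every eigenvector's sign
assert np.allclose(enc1, enc2)
```

Handling full eigenspaces (basis invariance, not just sign flips) requires more machinery, but the symmetrization idea above conveys why a dedicated architecture is needed rather than feeding raw eigenvectors to a standard network.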
This talk is based on joint work with Ching-Yao Chuang, Joshua Robinson, Derek Lim, Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, Lingxiao Zhao, Tess Smidt, Suvrit Sra and Haggai Maron.
This talk will also be broadcast via Zoom. Please register to receive viewing information.
Stefanie Jegelka is an X-Consortium Career Development Associate Professor in the Department of EECS at MIT. She is a member of the Computer Science and AI Lab (CSAIL) and the Center for Statistics, and an affiliate of IDSS and ORC. Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, Google research awards, a Two Sigma faculty research award, the German Pattern Recognition Award, and a Best Paper Award at the International Conference on Machine Learning (ICML). She was also an invited sectional lecturer at the ICM 2022. She has served as an Area Chair for NeurIPS and ICML, as Action Editor for JMLR, and as Program Chair for ICML 2022. Her research interests span the theory and practice of algorithmic machine learning.
Lunch & Socializing
Lunch will be provided on a first-come, first-served basis.