Advances in Data Science Seminar: Generalization and Representational Limits of Graph Neural Networks

Time: 14:00 - 15:00

Venue: Online

Sorry, this event has now ended.

Leading researchers present their recent advances in Machine Learning and Computational Statistics. This is an online seminar from the University of Manchester's Institute for Data Science & Artificial Intelligence.

We will be video recording all of the talks in this year's Advances in Data Science seminar series. If you have registered for any of the seminars and object to us doing so, please notify us immediately.

Speaker: Vikas Garg (Aalto University)

Title: Generalization and Representational Limits of Graph Neural Networks.

Abstract: Graphs provide a natural abstraction to model relational and strategic data in domains as diverse as biology (e.g., molecules), multiagent settings (e.g., online vendors on ecommerce platforms), and distributed systems (e.g., the Internet). Graphs also find much use as theoretical objects (e.g., probabilistic graphical models), and several important algorithms (e.g., max-flow for image segmentation) can be invoked when tasks are formulated in terms of graphs.

Graph neural networks (GNNs) are state-of-the-art models for embedding graphs, and thus naturally suited for making predictions based on graphs, but they remain poorly understood in terms of what they can and cannot do. In this talk, I will describe our recent work that addresses two fundamental questions about GNNs. First, we analyze whether GNNs can distinguish graphs that differ in properties such as cycles but have similar local structure. We then provide the first data-dependent generalization bounds for message-passing GNNs.
The talk will be based on joint work with Stefanie Jegelka (MIT) and Tommi Jaakkola (MIT): https://www.mit.edu/~vgarg/GNN_CameraReady.pdf
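As a small illustration of the representational question above (a sketch, not taken from the talk itself): the discriminative power of standard message-passing GNNs is bounded by the 1-dimensional Weisfeiler-Leman (1-WL) color-refinement test, and a classic pair of graphs with identical local structure but different cycle structure is two disjoint triangles versus a single 6-cycle. The snippet below runs 1-WL on both and shows they receive identical color multisets.

```python
# Hypothetical sketch: 1-WL color refinement, which upper-bounds what
# message-passing GNNs can distinguish. Graphs are plain adjacency dicts.
from collections import Counter

def wl_colors(adj, rounds=3):
    """Run 1-WL refinement; return the multiset of final node colors."""
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        # A node's new color combines its own color with the sorted
        # multiset of its neighbors' colors (like a GNN aggregation step).
        colors = {v: hash((colors[v],
                           tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return Counter(colors.values())

# Two disjoint triangles (6 nodes, girth 3)...
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
# ...versus a single 6-cycle (6 nodes, girth 6).
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

# Both graphs are 2-regular, so every node sees the same local view at
# every round and 1-WL cannot tell the two graphs apart.
assert wl_colors(two_triangles) == wl_colors(hexagon)
```

Since the two graphs differ in their cycles (girth 3 vs. girth 6) yet are 1-WL-indistinguishable, no standard message-passing GNN can separate them either, which is exactly the kind of limitation the talk examines.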

Bio: Vikas Garg is an Assistant Professor at Aalto University. He recently graduated from MIT, where his PhD in Computer Science was supervised by Prof. Tommi Jaakkola. His research interests in machine learning include generative models, graphical models, theory of deep learning, and learning under uncertainty or resource constraints along with their intersections with optimization and game theory.