---
layout: default
title: Relational Representation Learning
---

New: Submit your questions for the panel here!

## Overview

Date and Time: 8:30 AM - 6:00 PM, December 8, 2018
Location: Room 517A, Palais des Congrès de Montréal, Montréal, Canada

Relational reasoning, i.e., learning and inference with relational data, is key to understanding how objects interact with one another and give rise to complex phenomena in the everyday world. Well-known applications include knowledge base completion and social network analysis. Although many relational datasets are available, integrating them directly into modern machine learning systems is challenging, since these systems rely on continuous, gradient-based optimization and make strong i.i.d. assumptions. Relational representation learning has the potential to overcome these obstacles: it enables the fusion of recent advances such as deep learning with relational reasoning to learn from high-dimensional data. The success of such methods can facilitate novel applications of relational reasoning in areas such as scene understanding, visual question answering, understanding chemical and biological processes, program synthesis and analysis, decision-making in multi-agent systems, and many others.

How should we rethink classical representation learning theory for relational representations? Classical approaches based on dimensionality reduction techniques such as Isomap and spectral decompositions still serve as strong baselines, and are slowly paving the way for modern methods in relational representation learning based on random walks over graphs, message passing in neural networks, group-invariant deep architectures, and many others. How can systems be designed, and potentially deployed, for large-scale representation learning? What are promising avenues, beyond traditional applications like knowledge base completion and social network analysis, that can benefit from relational representation learning?
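To make the message-passing idea mentioned above concrete, here is a minimal sketch (not from any workshop paper; the graph, feature sizes, and weight matrix are illustrative assumptions) of a single neighborhood-aggregation step, the core operation behind many graph neural networks:

```python
# Minimal message-passing sketch: each node's new representation is the mean
# of its neighbors' features, passed through a (here randomly initialized,
# hypothetical) linear map and a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph on 4 nodes with edges 0-1, 1-2, 2-3.
adj = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

features = rng.normal(size=(4, 8))  # initial node features: 4 nodes, 8 dims
weight = rng.normal(size=(8, 8))    # stand-in for a learned weight matrix

# Row-normalize the adjacency so each node averages over its neighbors.
deg = adj.sum(axis=1, keepdims=True)
norm_adj = adj / deg

# One round of message passing: aggregate neighbor features, then transform.
messages = norm_adj @ features
new_features = np.tanh(messages @ weight)

print(new_features.shape)  # one embedding per node
```

Stacking several such rounds lets information propagate along longer paths in the graph; in a trained model the weight matrix would be learned by gradient descent rather than sampled at random.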

This workshop aims to bring together researchers from both academia and industry interested in addressing various aspects of representation learning for relational reasoning.

## Invited Speakers & Panelists

### Speakers
Joan Bruna, New York University
Pedro Domingos, University of Washington
Lise Getoor, University of California, Santa Cruz
Timothy Lillicrap, DeepMind
Marina Meila, University of Washington
Maximilian Nickel, Facebook Artificial Intelligence Research

### Panelists
Aditya Grover, Stanford University
William Hamilton, McGill/Facebook Artificial Intelligence Research
Jessica Hamrick, DeepMind
Thomas Kipf, University of Amsterdam
Paroma Varma, Stanford University
Marinka Zitnik, Stanford University

## Schedule

| Time | Event |
|------|-------|
| 8:30-8:45 AM | Welcome |
| 8:45-9:00 AM | Contributed Talk: Charlotte Bunne, *Learning Generative Models across Incomparable Spaces* |
| 9:00-9:30 AM | Invited Talk: Marina Meila, *Measuring robustness from a single graph* |
| 9:30-9:45 AM | Contributed Talk: Lingfei Wu, *From Node Embedding to Graph Embedding: Scalable Global Graph Kernel via Random Features* |
| 9:45-10:15 AM | Invited Talk: Timothy Lillicrap, *Inductive Biases for Relational Representation and Learning* |
| 10:15-10:30 AM | Poster Spotlight Talks |
| 10:30-11:00 AM | Coffee Break + Poster Session 1 |
| 11:00-11:30 AM | Invited Talk: Joan Bruna, *Community Detection with Non-backtracking Graph Neural Networks* |
| 11:30-11:45 AM | Contributed Talk: Yunsheng Bai, *Convolutional Set Matching for Graph Similarity* |
| 11:45 AM-12:15 PM | Invited Talk: Maximilian Nickel, *Geometric Representation Learning in Relational Domains* |
| 12:15-2:00 PM | Lunch |
| 2:00-2:30 PM | Invited Talk: Lise Getoor, *The Power of Structure: Exploiting Relationships for Representation Learning* |
| 2:30-2:45 PM | Contributed Talk: Robert Csordas, *Improved Addressing in the Differentiable Neural Computer* |
| 2:45-3:00 PM | Poster Spotlight Talks |
| 3:00-3:30 PM | Coffee Break + Poster Session 2 |
| 3:30-4:00 PM | Invited Talk: Pedro Domingos, *The Power of Objects and Relations in Deep Reinforcement Learning* |
| 4:00-4:45 PM | Panel (Moderator: Paroma Varma; Panelists: Aditya Grover, William Hamilton, Jessica Hamrick, Thomas Kipf, Marinka Zitnik) |
| 4:45-5:45 PM | Poster Session |
| 5:45-6:00 PM | Awards + Closing Remarks |

## Organizers

Aditya Grover, Stanford University
Paroma Varma, Stanford University
Fred Sala, Stanford University
Steven Holtzen, University of California, Los Angeles
Jennifer Neville, Purdue University
Stefano Ermon, Stanford University
Christopher Ré, Stanford University

Contact: r2learning@googlegroups.com

## Accepted Papers

## Program Committee

- Albert Gu, Stanford University
- Alexander Gaunt, Microsoft Research (Best Reviewer Award)
- Alexander Ratner, Stanford University
- Avner May, Stanford University
- Beliz Gunel, Stanford University
- Bryan He, Stanford University
- Bryan Perozzi, Stony Brook University
- Changping Meng, Purdue University
- Daniel Levy, Stanford University
- Daniel Kang, Stanford University
- Golnoosh Farnadi, UC Santa Cruz
- Guilherme Gomes, Purdue University
- Happy Mittal, IIT Delhi
- Hima Lakkaraju, Harvard University
- Jared Dunnmon, Stanford University
- Jiaming Song, Stanford University
- Jian Zhang, Stanford University
- Jiasen Yang, Purdue University
- Jiaxuan You, Stanford University
- Kristy Choi, Stanford University
- Marinka Zitnik, Stanford University
- Maruan Al-Shedivat, CMU
- Max Lam, Stanford University
- Megan Leszczynski, Stanford University
- Mengyue Hang, Purdue University
- Nikolaos Vasiloglou, RelationalAI
- Oleksandr Polozov, Microsoft Research
- Rex Ying, Stanford University
- Sen Wu, Stanford University
- Tal Friedman, UCLA
- Thomas Kipf, University of Amsterdam
- Tony Ginart, Stanford University
- Tri Dao, Stanford University
- William Hamilton, McGill/Facebook Artificial Intelligence Research
- Yang Song, Stanford University
- Yitao Liang, UCLA
- Yujia Li, DeepMind
- Zhaobin Kuang, Stanford University