Understanding the Representation Power of Graph Neural Networks in Learning Graph Topology

Nima Dehmamy, Albert-László Barabási, Rose Yu

Research output: Contribution to conference › Paper › peer-review

Abstract

To deepen our understanding of graph neural networks, we investigate the representation power of Graph Convolutional Networks (GCN) through the looking glass of graph moments, a key property of graph topology encoding paths of various lengths. We find that GCNs are rather restrictive in learning graph moments. Without careful design, GCNs can fail miserably even with multiple layers and nonlinear activation functions. We analyze theoretically the expressiveness of GCNs, concluding that a modular GCN design, using different propagation rules with residual connections, could significantly improve the performance of GCN. We demonstrate that such modular designs are capable of distinguishing graphs from different graph generation models for surprisingly small graphs, a notoriously difficult problem in network science. Our investigation suggests that depth is much more influential than width, with deeper GCNs being more capable of learning higher-order graph moments. Additionally, combining GCN modules with different propagation rules is critical to the representation power of GCNs.
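The graph moments referred to in the abstract can be illustrated with a short sketch. Here, node-level moments of order p are taken as powers of the adjacency matrix applied to the all-ones vector, so entry i counts walks of length p starting at node i; this is one common formulation and may differ in detail from the paper's exact definition.

```python
import numpy as np

def node_graph_moments(A, max_order=3):
    """Node-level graph moments up to max_order.

    Column p-1 of the result is A^p @ 1, i.e. the number of
    walks of length p starting at each node (an illustrative
    definition; the paper's construction may differ).
    """
    n = A.shape[0]
    ones = np.ones(n)
    moments = []
    Ap = np.eye(n)
    for _ in range(max_order):
        Ap = Ap @ A                 # next power of the adjacency matrix
        moments.append(Ap @ ones)   # walk counts of this length per node
    return np.stack(moments, axis=1)  # shape (n, max_order)

# Toy example: a 4-node path graph 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
M = node_graph_moments(A, max_order=3)
# First column recovers the degree sequence [1, 2, 2, 1]
```

Since the first-order moment is just the degree sequence, a standard single-layer GCN (which aggregates over one-hop neighborhoods) can capture it, while higher-order moments require deeper networks, consistent with the abstract's depth-over-width finding.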
Original language: English
Pages: 1-14
DOIs
State: Published - 2019
Externally published: Yes
Event: 33rd Conference on Neural Information Processing Systems, NeurIPS 2019 - Vancouver, Canada
Duration: 8 Dec 2019 - 14 Dec 2019

Conference

Conference: 33rd Conference on Neural Information Processing Systems, NeurIPS 2019
Country/Territory: Canada
City: Vancouver
Period: 8/12/19 - 14/12/19

Keywords

  • APPROXIMATION
