Learning Discrete Structures for Graph Neural Networks
Luca Franceschi, Mathias Niepert, Massimiliano Pontil, Xiao He
Poster later: Pacific Ballroom # 177
Introduction & Motivation
Aim: apply Graph Neural Networks (GNNs) to settings in which an input graph is not available (or is incomplete/noisy).
LDS: Jointly Learning Structure and Parameters
Formulation: a bilevel programming problem (gradient-based hyperparameter optimization) with discrete random variables ⇒ a discrete and sparse graph.
1. Initialize the parameters θ of the graph generator and the GCN parameters w.
2. Sample graphs A_1, …, A_τ ~ P_θ.
3. Compute gradients on the training data points and update the GCN parameters:
   w_{t+1} = Φ(w_t, A_1) = w_t − γ ∇L_t(w_t, A_1)
   …
   w_{t+τ} = w_{t+τ−1} − γ ∇L_{t+τ−1}(w_{t+τ−1}, A_τ)
4. Compute hypergradients on the validation nodes and update θ of the graph generator:
   ∇_θ 𝔼[ F(w_{θ,τ}, θ) ]
See Franceschi et al., Forward and Reverse Gradient-based Hyperparameter Optimization, ICML 2017.
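The alternating loop above can be sketched in code. This is a minimal, dependency-free illustration, not the paper's implementation: the data, model sizes, and learning rates are made up, the one-layer "GCN" is a toy mean-aggregation layer, inner gradients are finite differences, and a score-function (REINFORCE) estimator stands in for the paper's hypergradient computation over the Bernoulli edge distribution P_θ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all sizes and rates are illustrative assumptions).
n, tau, gamma, eta = 6, 5, 0.1, 0.5
x = rng.normal(size=(n, 1))              # node features
y_train = (x[:, 0] > 0).astype(float)    # toy training targets
y_val = y_train                          # stand-in "validation" targets

def sample_graph(theta):
    """Sample a symmetric adjacency A with independent Bernoulli(sigmoid(theta)) edges."""
    p = 1.0 / (1.0 + np.exp(-theta))
    a = (rng.random(theta.shape) < p).astype(float)
    a = np.triu(a, 1) + np.triu(a, 1).T  # symmetrize, no self-loops
    return a, p

def gcn_forward(a, w):
    """Toy one-layer GCN-like step: self-loop + neighbour average, then a linear map."""
    deg = a.sum(1, keepdims=True) + 1.0
    h = (a @ x + x) / deg
    return (h @ w).ravel()

def loss(pred, y):
    return np.mean((pred - y) ** 2)

theta = np.zeros((n, n))                 # edge-probability logits of the graph generator
w = np.zeros((1, 1))                     # GCN parameters

for outer in range(20):
    a, p = sample_graph(theta)
    # Inner loop: tau gradient steps on the training loss
    # (finite differences keep the sketch self-contained).
    for _ in range(tau):
        eps = 1e-5
        g = (loss(gcn_forward(a, w + eps), y_train)
             - loss(gcn_forward(a, w - eps), y_train)) / (2 * eps)
        w = w - gamma * g
    # Outer step: score-function estimate of the hypergradient of the
    # validation loss w.r.t. theta, using d log P_theta(A)/d theta.
    f_val = loss(gcn_forward(a, w), y_val)
    score = a * (1 - p) - (1 - a) * p
    theta = theta - eta * f_val * score
```

In the actual LDS algorithm the inner updates are real SGD steps on the GCN and the hypergradient is obtained by differentiating through the unrolled inner optimization (forward/reverse hypergradients, ICML 2017); the loop structure, however, matches the four steps above.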
Experiments: Semi-supervised Learning
Many Thanks! Poster # 177
GitHub page: https://github.com/lucfra/LDS
[Figure: learned representations by a GCN on Citeseer, under a dense graph, a kNN graph, and the LDS graph]