VAE to learn adjacency structure w/ edge weights

Hi

I’m interested in using graph VAEs in a coupled auto-encoder setting. My data consists of two graphs, F \in \mathbb{R}^{N \times N} and S \in \mathbb{R}^{N \times N}. These are dense, fully connected graphs with continuous edge weights. Ultimately, I want to be able to predict F from S, and vice versa.

For now, I would like to know: how can I use a VAE approach, not to learn the binary adjacency structure, but to learn the edge weights? The inner product decoder is typically used to model the probability of an edge, but not the weight of the edge itself.
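Roughly what I have in mind is something like the following minimal PyTorch sketch (module and function names are hypothetical, not from any existing implementation): keep the usual VGAE encoder, but replace the sigmoid/Bernoulli inner-product decoder with one that outputs real-valued edge weights, trained with an MSE reconstruction term plus the standard KL term.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedInnerProductDecoder(nn.Module):
    """Sketch: decode real-valued edge weights instead of edge probabilities."""
    def forward(self, z):
        # raw inner products as predicted edge weights (no sigmoid)
        return z @ z.t()

def weighted_vae_loss(z_mean, z_logvar, z, weighted_adj, decoder):
    # Gaussian reconstruction term over edge weights (MSE), replacing the
    # binary cross-entropy used for edge probabilities
    recon = F.mse_loss(decoder(z), weighted_adj)
    # standard VAE KL term against a unit Gaussian prior
    kl = -0.5 * torch.mean(1 + z_logvar - z_mean.pow(2) - z_logvar.exp())
    return recon + kl
```

Does this kind of replacement of the decoder make sense, or is there a more principled way to do it?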

Hi,

I don’t quite understand your question. Do you want to learn a real-valued adjacency matrix (which is equivalent to learning the edge weights for all edges)? What’s the difference between the “probability of an edge” and “the weight of the edge itself”?

Hi

Yes, the intention is to learn a real-valued adjacency matrix, i.e., to learn the edge weights for all edges. The topology of the graph is fixed (fully connected).

Re. point #2: the inner product decoder, \sigma(Z Z^{T}), is generally used to learn the graph topology, i.e., the likelihood of an edge existing. In this case, I’m not interested in the topology, only in the edge weights.

For graph F, edge weights f_{i,j} \in [-1,1]. For graph S, edge weights s_{i,j} \in \mathbb{R}^{+}.
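Given those ranges, one idea I’m considering (again just a sketch, class name hypothetical) is to put a range-matching activation on top of the inner products: tanh for F, since its weights lie in [-1, 1], and softplus for S, since its weights are positive reals.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RangeMatchedDecoder(nn.Module):
    """Sketch: inner-product decoder with an output activation chosen to
    match the edge-weight range of each graph."""
    def __init__(self, activation):
        super().__init__()
        self.activation = activation

    def forward(self, z):
        return self.activation(z @ z.t())

decoder_f = RangeMatchedDecoder(torch.tanh)   # F: weights in [-1, 1]
decoder_s = RangeMatchedDecoder(F.softplus)   # S: weights in (0, inf)
```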


You can use g.update_all(fn.u_mul_e(...), ...) to multiply the edge data with the node data. Are you referring to the Graph VAE paper? There’s also a DGL implementation of it at https://github.com/shionhonda/gae-dgl.
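Roughly, the call looks like this (toy graph, feature names 'h' and 'w' are just examples): u_mul_e multiplies each source node’s feature by the corresponding edge feature, and the reduce function aggregates the weighted messages at the destination nodes.

```python
import dgl
import dgl.function as fn
import torch

# toy graph with 3 nodes and 3 edges
g = dgl.graph(([0, 1, 2], [1, 2, 0]))
g.ndata['h'] = torch.randn(3, 4)   # node features
g.edata['w'] = torch.rand(3, 1)    # continuous edge weights

# multiply each source node's feature by the edge weight,
# then sum the weighted messages at the destination nodes
g.update_all(fn.u_mul_e('h', 'w', 'm'), fn.sum('m', 'h_new'))
print(g.ndata['h_new'].shape)  # torch.Size([3, 4])
```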
