Are there message-passing layers that can't be used on graphs different from the ones they were trained on?

Hi, this is more of a theoretical question. I was reading the GAT paper and, in it, the authors claim that “a model trained on a specific structure can not be directly applied to a graph with a different structure”, because “the learned filters depend on the Laplacian eigenbasis, which depends on the graph structure.” Their focus is on spectral graph convolution operations, such as GCN and its predecessors, whose operations depend on the Laplacian or adjacency matrix of a graph, which seems to tie the model to the graph it was trained on. Each experiment in the GCN paper was performed on a single large citation graph.
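
If I understand the GCN paper correctly, its layer-wise propagation rule can be written as

$$
H^{(l+1)} = \sigma\left(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)}\right), \qquad \tilde{A} = A + I_N, \quad \tilde{D}_{ii} = \sum_j \tilde{A}_{ij},
$$

so the graph structure enters through the normalized adjacency matrix $\tilde{A}$, while the learned weights $W^{(l)}$ only act on the feature dimensions.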

However, in libraries such as DGL and PyG, the implementations of these graph layers seem to do away with the direct use of Laplacian and adjacency matrices in favor of what look like more general message-passing equations (see here). Does this mean it is safe to train the same neural network model, which uses one of these layers, on multiple graphs with different structures?
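
To illustrate what I mean, here is a minimal sketch using PyG's `GCNConv` (the two graphs below are made up purely for illustration): the same layer instance is applied to two graphs with different numbers of nodes and edges, with the structure passed in at call time.

```python
import torch
from torch_geometric.nn import GCNConv

# A single GCNConv layer: its parameters depend only on the feature
# dimensions, not on any particular graph.
conv = GCNConv(in_channels=16, out_channels=32)

# Graph A: 5 nodes connected in a ring (made-up example).
x_a = torch.randn(5, 16)
edge_index_a = torch.tensor([[0, 1, 2, 3, 4],
                             [1, 2, 3, 4, 0]], dtype=torch.long)

# Graph B: 8 nodes with a different connectivity pattern.
x_b = torch.randn(8, 16)
edge_index_b = torch.tensor([[0, 0, 1, 2, 5, 6, 7],
                             [1, 2, 3, 4, 6, 7, 0]], dtype=torch.long)

# The same layer runs on both graphs; the structure is supplied via
# edge_index at call time rather than baked into the learned weights.
out_a = conv(x_a, edge_index_a)  # shape [5, 32]
out_b = conv(x_b, edge_index_b)  # shape [8, 32]
```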

You are right. Although it originated in spectral graph theory, GCN is now more commonly formulated as a message-passing neural network. As a result, all these layers can be used inductively (i.e., tested on a graph different from the one used for training). However, how effective they are in an inductive setting is still an open question.
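
Concretely, the same GCN layer can be rewritten node-wise as a message-passing update,

$$
h_i^{(l+1)} = \sigma\left(\sum_{j \in \mathcal{N}(i) \cup \{i\}} \frac{1}{\sqrt{\tilde{d}_i \tilde{d}_j}} W^{(l)} h_j^{(l)}\right),
$$

where $\tilde{d}_i$ is the degree of node $i$ after adding a self-loop. The learned weights $W^{(l)}$ are shared across all nodes and do not reference any particular graph, so the layer can be applied to any neighborhood structure.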

Thanks for sharing the paper; it discusses exactly what I mentioned. It even has an appendix named “Interpretation of Laplacian based models as MPNNs”.