Hi, this is more of a theoretical question. I was reading the GAT paper and, in it, the authors claim that "a model trained on a specific structure can not be directly applied to a graph with a different structure", because "the learned filters depend on the Laplacian eigenbasis, which depends on the graph structure." Their focus is on spectral graph convolution operations, such as GCN and its predecessors, whose propagation rules depend on the Laplacian or adjacency matrix of a graph, which seems to tie the model to the particular graph it was trained on. Indeed, each experiment in the GCN paper was performed on a single large citation graph.
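For context, the spectral convolution those works build on is defined through the eigendecomposition of the normalized graph Laplacian, $L = I_N - D^{-1/2} A D^{-1/2} = U \Lambda U^\top$, so a filter $g_\theta$ acts on a signal $x$ as

$$g_\theta \star x = U \, g_\theta(\Lambda) \, U^\top x.$$

Since the eigenbasis $U$ comes from one particular graph's Laplacian, a filter learned in that basis has no obvious meaning on a graph with a different $U$, which is (as I understand it) the basis of the GAT authors' claim.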
However, in libraries such as DGL and PyG, the implementations of these graph layers seem to do away with the direct use of Laplacian and adjacency matrices in favor of what look like more general, per-node message-passing equations (see here). Does this mean it is safe to train the same neural network model, built from one of these layers, on multiple graphs with different structures?
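Mechanically, at least, nothing seems to stop this. Here is a minimal sketch using PyG's `GCNConv` (the two toy graphs below are made up for illustration) showing the same layer, with the same weights, applied to graphs of different sizes and structures:

```python
import torch
from torch_geometric.nn import GCNConv

# One GCN layer with fixed feature dimensions (16 -> 8);
# its weights do not reference any particular graph.
conv = GCNConv(in_channels=16, out_channels=8)

# Toy graph 1: 3 nodes, a path 0-1-2 (edges listed in both directions).
x1 = torch.randn(3, 16)
edge_index1 = torch.tensor([[0, 1, 1, 2],
                            [1, 0, 2, 1]])

# Toy graph 2: 5 nodes with a different connectivity.
x2 = torch.randn(5, 16)
edge_index2 = torch.tensor([[0, 1, 2, 3, 4, 0],
                            [1, 2, 3, 4, 0, 2]])

# The same layer runs on both graphs without complaint.
out1 = conv(x1, edge_index1)  # shape [3, 8]
out2 = conv(x2, edge_index2)  # shape [5, 8]
print(out1.shape, out2.shape)
```

So the code clearly runs; what I am unsure about is whether it is also theoretically sound to train this way, given the spectral origins of the layer, or whether the GAT authors' objection still applies.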