- Are DGLGraphs directed or not? How to represent an undirected graph?
All DGLGraphs are directed. To represent an undirected graph, you need to create edges for both directions. `dgl.to_bidirected` can be helpful; it converts a DGLGraph into a new one with edges in both directions.
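For instance, the following sketch doubles each edge of a small directed graph:

```python
import dgl
import torch

# A directed graph with edges 0->1 and 1->2
g = dgl.graph((torch.tensor([0, 1]), torch.tensor([1, 2])))
# The new graph also contains the reverse edges 1->0 and 2->1
bg = dgl.to_bidirected(g)
print(bg.edges())
```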
- How to sample a subgraph for the n-hop neighborhood of a node?
To sample the n-hop neighborhood of a node, you can use `dgl.dataloading.MultiLayerFullNeighborSampler` with n layers.
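A minimal sketch, assuming a recent DGL version where `dgl.dataloading.DataLoader` is available (older releases call it `NodeDataLoader`); the graph and seed node are made up for illustration:

```python
import dgl
import torch

g = dgl.rand_graph(100, 500)
# Take all neighbors at each of the 2 hops
sampler = dgl.dataloading.MultiLayerFullNeighborSampler(2)
dataloader = dgl.dataloading.DataLoader(
    g, torch.tensor([0]), sampler, batch_size=1, shuffle=False)
# Each iteration yields the sampled 2-hop neighborhood as blocks
input_nodes, output_nodes, blocks = next(iter(dataloader))
```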
- How to change the canonical edge types or node types of a heterogeneous graph once constructed?
DGL does not allow changing the canonical edge types or node types of a constructed heterogeneous graph. One needs to construct a new graph in this case.
- What happens to isolated nodes when performing message passing on a graph with them?
Since isolated nodes receive no messages, the aggregation results for them will be zero-valued tensors.
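A quick check on a tiny graph where node 2 is isolated:

```python
import dgl
import dgl.function as fn
import torch

# Nodes 0 and 1 are connected in both directions; node 2 is isolated
g = dgl.graph((torch.tensor([0, 1]), torch.tensor([1, 0])), num_nodes=3)
g.ndata['h'] = torch.ones(3, 2)
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_aggr'))
print(g.ndata['h_aggr'])  # the row for node 2 is all zeros
```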
- I’ve installed the latest version of DGL, but `dgl.__version__` still gives the old number.
You may have multiple versions of DGL installed. First uninstall all old versions of DGL with `pip uninstall`. You can also check the installation path with `dgl.__path__`.
- How to construct a weighted graph from a weighted adjacency matrix?
You can represent a weighted adjacency matrix by a SciPy sparse matrix and pass it to `dgl.from_scipy`.
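A minimal sketch; passing `eweight_name` to `dgl.from_scipy` stores the matrix values as an edge feature:

```python
import dgl
import numpy as np
import scipy.sparse as sp

# A 3-node weighted adjacency matrix in COO format
row = np.array([0, 1, 2])
col = np.array([1, 2, 0])
weights = np.array([0.5, 1.0, 2.0])
adj = sp.coo_matrix((weights, (row, col)), shape=(3, 3))
# The weights become the edge feature g.edata['w']
g = dgl.from_scipy(adj, eweight_name='w')
print(g.edata['w'])
```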
- Why does my GPU memory usage increase after each evaluation stage with PyTorch?
You need to disable autograd using `torch.no_grad`, so that PyTorch does not record computation graphs during evaluation.
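A minimal sketch of the pattern, with a stand-in model:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
feat = torch.randn(4, 10)

model.eval()
with torch.no_grad():  # no computation graph is recorded inside this block
    pred = model(feat)
```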
- How to get deterministic training results?
You need to fix the random seeds of Python, NumPy, the backend framework (e.g., PyTorch), and DGL (with `dgl.seed`), as in the sketch after this list. Note that DGL does not guarantee deterministic training results in the following cases:
- Using min/max as the reduce function
- Performing message passing on DGLGraphs whose format is restricted to `coo`
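A minimal seed-fixing sketch, assuming a PyTorch backend:

```python
import random

import numpy as np
import torch
import dgl

seed = 42
random.seed(seed)        # Python
np.random.seed(seed)     # NumPy
torch.manual_seed(seed)  # PyTorch
dgl.seed(seed)           # DGL
```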
- How to set and save graph-level features?
A DGLGraph only stores node-level and edge-level features, not graph-level features. You can store graph-level features in bare tensors. To save them together with DGLGraphs, pass them to the `labels` argument of `dgl.save_graphs`.
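A minimal sketch; `dgl.save_graphs` accepts a dictionary of tensors as `labels`, and `dgl.load_graphs` returns it alongside the graphs:

```python
import dgl
import torch

g = dgl.rand_graph(4, 10)
# A graph-level label kept in a bare tensor, one entry per graph
graph_labels = {'glabel': torch.tensor([1])}
dgl.save_graphs('graphs.bin', [g], labels=graph_labels)

# Load the graphs and the label dictionary back
graphs, label_dict = dgl.load_graphs('graphs.bin')
```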
- How to copy features from subgraphs to their parent graphs and vice versa?
All subgraph extraction APIs (e.g., `dgl.node_subgraph` and `dgl.edge_subgraph`) automatically extract features from the parent graphs by default. The feature extraction is efficient and only happens when the features are accessed. To copy subgraph data back to the parent graphs, you can use the original node/edge IDs stored in the subgraph. Check out the API documents for code examples.
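For instance, here is a minimal sketch of writing node features back to the parent graph via the original IDs stored under `dgl.NID`:

```python
import dgl
import torch

g = dgl.rand_graph(6, 20)
g.ndata['h'] = torch.zeros(6, 2)
sg = dgl.node_subgraph(g, [0, 2, 4])

# Modify the features inside the subgraph
sg.ndata['h'] = sg.ndata['h'] + 1.

# dgl.NID maps each subgraph node to its ID in the parent graph
g.ndata['h'][sg.ndata[dgl.NID]] = sg.ndata['h']
```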
- How to perform message passing on weighted graphs?
To multiply each message by the corresponding edge weight, you need to modify the message function passed to `update_all`. Consider the following example:

```python
import dgl.function as fn

# g.ndata['h'] stores the input node features
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h'))
```

To adapt it to weighted graphs:

```python
# g.edata['w'] stores the edge weights
g.update_all(fn.u_mul_e('h', 'w', 'm'), fn.sum('m', 'h'))
```
- How to batch heterogeneous graphs with different node/edge types?
DGL requires the heterogeneous graphs being batched to have the same sets of node/edge types. If you want to batch graphs with different node/edge types, you can add the missing types as placeholders with zero nodes/edges, as in the following example.

```python
g1 = dgl.heterograph(
    {('A', 'r1', 'B'): ([0, 1], [1, 2]), ('A', 'r2', 'C'): ([], [])},
    num_nodes_dict={'A': 2, 'B': 3, 'C': 0})
g2 = dgl.heterograph(
    {('A', 'r1', 'B'): ([], []), ('A', 'r2', 'C'): ([1, 2], [3, 4])},
    num_nodes_dict={'A': 3, 'B': 0, 'C': 5})
bg = dgl.batch([g1, g2])
```
- How to combine node features and edge features in message passing?
The code snippet below presents two ways to do so.

```python
import dgl.function as fn
import torch

# g.ndata['hn'] stores the input node features
# g.edata['he'] stores the input edge features

# Case 1: Perform two rounds of message passing,
# one using node features, one using edge features
g.update_all(fn.copy_u('hn', 'm'), fn.sum('m', 'hn_aggr'))
g.update_all(fn.copy_e('he', 'm'), fn.sum('m', 'he_aggr'))
# You can then further combine g.ndata['hn_aggr'] and g.ndata['he_aggr']

# Case 2: First copy node features to edges and
# then concatenate node features and edge features.
# It is recommended to follow case 1 whenever possible,
# as case 2 can consume a lot more memory.
g.apply_edges(fn.copy_u('hn', 'hn'))
g.edata['he'] = torch.cat([g.edata['he'], g.edata['hn']], dim=1)
g.update_all(fn.copy_e('he', 'm'), fn.sum('m', 'he_aggr'))
```
- How to deal with graphs without features?
Possible solutions include:
- Employ network embedding approaches like DeepWalk and use the output node representations as initial node features.
- Generate structure-based initial node features like node degrees.
- Learn node embeddings from scratch, as in the following code snippet.
```python
import dgl
import torch
import torch.nn as nn
from dgl.nn import GraphConv
from torch.optim import Adam

num_nodes = 5
emb_size = 5
g = dgl.rand_graph(num_nodes=num_nodes, num_edges=25)

# Learnable node embeddings used as input features
embed = nn.Embedding(num_nodes, emb_size)
model = GraphConv(emb_size, 1)
# Optimize the embeddings jointly with the model parameters
optimizer = Adam(list(model.parameters()) + list(embed.parameters()), lr=1e-3)

labels = torch.zeros((num_nodes, 1))
criterion = nn.BCEWithLogitsLoss()
num_epochs = 5
for _ in range(num_epochs):
    pred = model(g, embed.weight)
    loss = criterion(pred, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
- Does DGL have examples for inductive learning?
For node classification, see the examples of inductive learning with GraphSAGE on Reddit and GAT on PPI. The graph property prediction setting is naturally inductive, and you can find various examples for this task here.
- How to construct adjacency matrices from `g.edata['...']`?
Assume `g.edata['...']` is a tensor of shape (E, 1), where E is the number of edges.

```python
import torch

num_nodes = g.num_nodes()
adj = torch.zeros(num_nodes, num_nodes)
src, dst = g.edges()
# Squeeze the (E, 1) weights to shape (E,) before scattering them
adj[dst, src] = g.edata['...'].squeeze(-1)
```
- What are the differences between `ndata` and `srcdata`/`dstdata`?
As indicated by the names, `srcdata` and `dstdata` refer to the data of source nodes and destination nodes, respectively. For a graph with a single node type, there is no distinction between source nodes and destination nodes, and `ndata`, `srcdata`, and `dstdata` are all equivalent.
`srcdata` and `dstdata` differ for graphs whose source nodes and destination nodes belong to two different node types. This is the case for a bipartite graph, a relation slice of a heterogeneous graph, or a message flow graph. Users then need to use `srcdata` and `dstdata` and carefully distinguish between them. For example, users need to set node features for message passing with `srcdata` and set node features for a residual connection with `dstdata`.
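A small sketch on a bipartite relation, where the source and destination nodes have different types (the type and feature names here are made up):

```python
import dgl
import torch

# 'user' nodes are sources and 'item' nodes are destinations
g = dgl.heterograph({('user', 'clicks', 'item'): ([0, 1], [1, 2])})
g.srcdata['h'] = torch.randn(g.num_src_nodes(), 4)  # 'user' features
g.dstdata['h'] = torch.randn(g.num_dst_nodes(), 4)  # 'item' features
```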
- How to apply a GNN to multiple graphs of the same structure?
You can either explicitly create multiple graphs of the same structure and batch them, or stack the input features, assign the resulting multi-dimensional features to a single graph, and adapt your model to them. For example, you can stack B input features of shape (N, D) and assign an input feature of shape (N, B, D) to the graph:
```python
import dgl
import dgl.function as fn
import torch
import torch.nn as nn

num_nodes = 10
num_edges = 40
g = dgl.rand_graph(num_nodes, num_edges)

# Stack the features of 3 channels, one per structure-sharing graph
num_channels = 3
in_size = 10
out_size = 5
feat = torch.randn(num_nodes, num_channels, in_size)

# The linear layer applies to the last dimension, so all channels share it
model = nn.Linear(in_size, out_size)
g.ndata['feat'] = model(feat)
# Message passing likewise broadcasts over the extra channel dimension
g.update_all(fn.copy_u('feat', 'm'), fn.sum('m', 'feat_new'))
```
- I’m a Windows user and when I import DGL it says “Module not found”.
Please check if you have the VC2015 Redistributable installed. If you are a GPU user, also check whether the CUDA libraries can be found in the system PATH.
- How to convert a classification model into a regression model?
You need to change the output size of the model from the number of classes to the number of targets to regress. You will also need to change the loss function (e.g., from cross entropy to mean squared error) and the evaluation metric.
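A minimal sketch of the change for a single-layer model; the sizes and the choice of MSE loss are illustrative, not prescribed by DGL:

```python
import torch.nn as nn
from dgl.nn import GraphConv

hidden_size = 16
num_targets = 1  # number of values to regress per node

# Classification would use GraphConv(hidden_size, num_classes)
# with nn.CrossEntropyLoss(); for regression:
model = GraphConv(hidden_size, num_targets)
loss_fn = nn.MSELoss()
```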
- How to create a heterogeneous graph from local data files?
See the user guide here.