Frequently Asked Questions (FAQ)

  1. Are DGLGraphs directed or not? How to represent an undirected graph?

All DGLGraphs are directed. To represent an undirected graph, you need to create edges in both directions. dgl.to_bidirected can be helpful; it converts a DGLGraph into a new one with edges in both directions.
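
For example, a minimal sketch:

import dgl
import torch

g = dgl.graph((torch.tensor([0, 1]), torch.tensor([1, 2])))
bg = dgl.to_bidirected(g)
# bg now has edges in both directions: (0, 1), (1, 0), (1, 2), (2, 1)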

  2. How to sample a subgraph for the n-hop neighborhood of a node?

To sample the n-hop neighborhood of a node, one can use dgl.dataloading.MultiLayerFullNeighborSampler with n layers.
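
A minimal sketch, assuming g is a DGLGraph (the dataloading class names follow recent DGL releases; older releases use NodeDataLoader):

import torch
import dgl.dataloading

# full-neighbor sampling over 2 layers covers the 2-hop neighborhood
sampler = dgl.dataloading.MultiLayerFullNeighborSampler(2)
dataloader = dgl.dataloading.DataLoader(
    g, torch.tensor([0]), sampler,  # seed node 0 is illustrative
    batch_size=1, shuffle=False)
input_nodes, output_nodes, blocks = next(iter(dataloader))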

  3. How to change the canonical edge types or node types of a heterogeneous graph once constructed?

DGL does not allow changing the canonical edge types or node types of a constructed heterogeneous graph. One needs to construct a new graph in this case.

  4. What happens to isolated nodes when performing message passing on a graph with them?

Since isolated nodes receive no messages, the aggregation results for them will be zero-valued tensors.
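
For example, with sum aggregation (a minimal sketch):

import torch
import dgl
import dgl.function as fn

# node 2 has no incoming edges
g = dgl.graph((torch.tensor([0, 1]), torch.tensor([1, 0])), num_nodes=3)
g.ndata['h'] = torch.ones(3, 2)
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_aggr'))
print(g.ndata['h_aggr'][2])  # tensor([0., 0.])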

  5. I’ve installed the latest version of DGL, but dgl.__version__ still reports the old version number.

You may have multiple versions of DGL installed. First uninstall all old versions with pip uninstall, then reinstall. You can check the installation path with dgl.__path__.
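
For example, to check which installation is actually being imported:

import dgl

print(dgl.__version__)  # the version actually imported
print(dgl.__path__)     # the directory it was loaded from
# If this is not the expected installation, run `pip uninstall dgl`
# repeatedly until no copies remain, then reinstall.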

  6. How to construct a weighted graph from a weighted adjacency matrix?

You can represent a weighted adjacency matrix as a SciPy sparse matrix and pass it to dgl.from_scipy.
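
A minimal sketch (eweight_name tells DGL under which edge-feature name to store the matrix values):

import dgl
import numpy as np
import scipy.sparse as sp

# a 3-node graph with weighted edges 0->1 and 1->2
adj = sp.coo_matrix(
    (np.array([0.5, 2.0]), (np.array([0, 1]), np.array([1, 2]))),
    shape=(3, 3))
g = dgl.from_scipy(adj, eweight_name='w')
# the nonzero values are now available as g.edata['w']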

  7. Why does my GPU memory usage increase after each evaluation stage with PyTorch?

You need to disable autograd with torch.no_grad during evaluation.
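
A minimal sketch, assuming model, g, and features are your model, graph, and input features:

import torch

model.eval()           # switch to evaluation mode
with torch.no_grad():  # disable autograd so no computation graph is retained
    logits = model(g, features)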

  8. How to get deterministic training results?

You need to fix the random seeds of Python, NumPy, the backend framework (e.g., PyTorch), and DGL (with dgl.seed); see the sketch after the list below. Note that DGL does not guarantee deterministic training results in the following cases:

  1. Using min/max as the reduce function.
  2. Performing message passing on DGLGraphs restricted to the coo format.
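
A minimal sketch of the seed fixing for a PyTorch backend:

import random
import numpy as np
import torch
import dgl

seed = 0
random.seed(seed)                 # Python
np.random.seed(seed)              # NumPy
torch.manual_seed(seed)           # PyTorch (CPU)
torch.cuda.manual_seed_all(seed)  # PyTorch (all GPUs)
dgl.seed(seed)                    # DGL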

  9. How to set and save graph-level features?

A DGLGraph only stores node-level and edge-level features, not graph-level features. You can keep graph-level features in bare tensors. To save them together with DGLGraphs, pass them to the labels argument of dgl.save_graphs.
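
A minimal sketch (the key 'feat' is illustrative):

import dgl
import torch

g = dgl.graph((torch.tensor([0]), torch.tensor([1])))
graph_feat = torch.randn(1, 4)  # one graph-level feature vector per graph
dgl.save_graphs('graphs.bin', [g], labels={'feat': graph_feat})
graphs, label_dict = dgl.load_graphs('graphs.bin')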

  10. How to copy features from subgraphs to their parent graphs and vice versa?

Subgraph APIs such as dgl.node_subgraph and dgl.edge_subgraph automatically extract features from the parent graph by default. The extraction is lazy and only happens when the features are accessed. To copy subgraph data back to the parent graph, use the original node/edge IDs stored in the subgraph. Check out the API documentation for code examples.
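
A minimal sketch of the copy-back, using the original node IDs stored under dgl.NID:

import dgl
import torch

g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])))
g.ndata['h'] = torch.zeros(4, 2)
sg = dgl.node_subgraph(g, [0, 1])   # 'h' is extracted lazily
sg.ndata['h'] = sg.ndata['h'] + 1.  # update subgraph features
# write back to the parent graph via the original node IDs
g.ndata['h'][sg.ndata[dgl.NID]] = sg.ndata['h']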

  11. How to perform message passing on weighted graphs?

To multiply each message by the corresponding edge weight, you need to modify the message function passed to update_all. Consider the following examples:

import dgl.function as fn

# g.ndata['h'] stores the input node features
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h'))

To adapt it to weighted graphs:

# g.edata['w'] stores the edge weights
g.update_all(fn.u_mul_e('h', 'w', 'm'), fn.sum('m', 'h'))

  12. How to batch heterogeneous graphs with different node/edge types?

DGL requires the heterogeneous graphs being batched to have the same set of node/edge types. If you want to batch graphs with different node/edge types, you can use placeholders as in the following example.

import dgl

g1 = dgl.heterograph(
    {('A', 'r1', 'B'): ([0, 1], [1, 2]), ('A', 'r2', 'C'): ([], [])},
    num_nodes_dict={'A': 2, 'B': 3, 'C': 0})
g2 = dgl.heterograph(
    {('A', 'r1', 'B'): ([], []), ('A', 'r2', 'C'): ([1, 2], [3, 4])},
    num_nodes_dict={'A': 3, 'B': 0, 'C': 5})
bg = dgl.batch([g1, g2])

  13. How to combine node features and edge features in message passing?

The code snippet below presents two ways to do so.

import torch
import dgl.function as fn

# g.ndata['hn'] stores the input node features
# g.edata['he'] stores the input edge features

# Case 1: perform two rounds of message passing,
# one using node features, one using edge features
g.update_all(fn.copy_u('hn', 'm'), fn.sum('m', 'hn_aggr'))
g.update_all(fn.copy_e('he', 'm'), fn.sum('m', 'he_aggr'))
# You can then further combine g.ndata['hn_aggr'] and g.ndata['he_aggr']

# Case 2: first copy node features to edges and
# then concatenate node features and edge features.
# It is recommended to follow Case 1 whenever possible,
# as Case 2 can consume a lot more memory.
g.apply_edges(fn.copy_u('hn', 'hn'))
g.edata['he'] = torch.cat([g.edata['he'], g.edata['hn']], dim=1)
g.update_all(fn.copy_e('he', 'm'), fn.sum('m', 'he_aggr'))

  14. How to deal with graphs without features?

Possible solutions include:

  • Employ network embedding approaches like DeepWalk and use the output node representations as initial node features.
  • Generate structure-based initial node features, such as node degrees (see the sketch after the snippet below).
  • Learn node embeddings from scratch as in the following code snippet.
import torch.nn as nn

# assume g is a DGLGraph and feat_size is the desired embedding dimension
embed = nn.Embedding(g.num_nodes(), feat_size)
g.ndata['h'] = embed.weight
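
For the degree-based option above, a minimal sketch:

# assume g is a DGLGraph; use in-degrees as 1-dimensional node features
g.ndata['h'] = g.in_degrees().float().unsqueeze(-1)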

  15. Does DGL have examples for inductive learning?

For node classification, see inductive learning with GraphSAGE on Reddit. The graph property prediction setting is naturally inductive, and you can find various examples for this task here.
