Will DGL support sparse node features?

DGL Version: 0.8.1
PyTorch Version: 1.9.0

Suppose there is a graph:

import dgl
import torch

g = dgl.graph(([3], [2]))

Then set a node feature using a torch sparse tensor:

i = [[2, 3], [0, 0]]                       # indices of the non-zero entries
v = [3, 4]                                 # their values
t = torch.sparse_coo_tensor(i, v, (4, 1))  # sparse (4, 1) feature matrix
g.ndata['h'] = t                           # the assignment itself works
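
The error appears when saving, for example via dgl.save_graphs (the exact call used is an assumption here, and the file name is illustrative):

dgl.save_graphs('graph.bin', [g])  # a call along these lines triggers the error below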

But when trying to save the graph, it raises a NotImplementedError:

/opt/conda/lib/python3.8/site-packages/dgl/backend/pytorch/tensor.py in zerocopy_to_dgl_ndarray(data)
    347 else:
    348     def zerocopy_to_dgl_ndarray(data):
--> 349         return nd.from_dlpack(dlpack.to_dlpack(data.contiguous()))
    350 
    351 def zerocopy_to_dgl_ndarray_for_write(input):

NotImplementedError: Tensors of type SparseTensorImpl do not have is_contiguous.

It seems that DGL does not support sparse tensors as node features when saving graphs. Will this be supported in the future?

One workaround is to transform the graphs into compact graphs and use dense tensors as node features. But the node IDs of the graph then change, which behaves unexpectedly in functions such as dgl.merge().
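
Concretely, that workaround might look something like this (assuming the original node IDs are available in ndata[dgl.NID] after compaction; the file name is illustrative):

h = g.ndata.pop('h')                        # detach the sparse feature first
cg = dgl.compact_graphs(g)                  # drops isolated nodes; node IDs are remapped
orig_ids = cg.ndata[dgl.NID]                # original IDs of the nodes that were kept
cg.ndata['h'] = h.to_dense()[orig_ids]      # dense features restricted to the kept nodes
dgl.save_graphs('compact_graph.bin', [cg])  # saving now works with dense features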

DGL currently does not support sparse node features. I guess you need to save those features separately.
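
For example, something along these lines (file names are made up for illustration):

h = g.ndata.pop('h')                  # take the sparse feature out of the graph
dgl.save_graphs('graph.bin', [g])     # save the structure and any remaining dense features
torch.save(h, 'sparse_feat.pt')       # save the sparse tensor with PyTorch

# loading them back later
graphs, _ = dgl.load_graphs('graph.bin')
g = graphs[0]
h = torch.load('sparse_feat.pt')      # keep h alongside the graph rather than in ndata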

Thanks for your kind reply! I will try saving the features separately.
BTW, what about the workaround mentioned above? I have tried to merge compact graphs with node features, but it seems hard when distinct nodes of different graphs end up with the same remapped IDs.

You could compact those graphs together so that the resulting node sets are the same:

g1, g2, ... = dgl.compact_graphs([g1, g2, ...])
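
For example, with two small graphs (node counts and edges are made up for illustration):

g1 = dgl.graph(([3], [2]), num_nodes=6)
g2 = dgl.graph(([5], [4]), num_nodes=6)
g1, g2 = dgl.compact_graphs([g1, g2])
# Both compacted graphs now share the same node set, and the original
# node IDs should be available under dgl.NID, so features can be aligned consistently:
print(g1.ndata[dgl.NID])   # original IDs of the kept nodes (2, 3, 4, 5 in some order)
print(g2.ndata[dgl.NID])   # the same mapping as g1.ndata[dgl.NID]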
