NodeFlow example with Torch

Hi,

I'm trying to run the NodeFlow example (1_sampling_mx.py) with torch's tensors and modules instead of mx and mx.Block. I keep getting:

```
raise DGLError(py_str(_LIB.DGLGetLastError()))
dgl._ffi.base.DGLError: [20:28:29] /Users/xiangsx/work/dgl/dgl/src/kernel/cpu/…/binary_reduce_impl.h:112: Unsupported dtype: int64
```

In the error stack, the following call seems to be the offender:

```python
g.update_all(message_func=fn.copy_src(src='h', out='m'),
             reduce_func=fn.sum(msg='m', out='h'),
             apply_node_func=lambda node: {'h': layersi['activation']})
```
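I'm wondering if the int64 simply comes from the node features I copied over as LongTensors. A minimal sketch of the cast I'm guessing is needed (the 'h' key is just what the example uses):

```python
# the copy_src/sum kernels appear to expect floating-point features,
# so cast the node features to float32 before calling update_all
g.ndata['h'] = g.ndata['h'].float()
```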

By the way, I had to comment out layer.initialize() because torch.nn.Module doesn't have it the way mx.Block does.

Any idea how I could fix this?

Thanks so much.

Hi,

We are deprecating the old NodeFlow and changing it into more intuitive APIs. Please refer to the example at https://github.com/dmlc/dgl/blob/master/examples/pytorch/graphsage/train_sampling.py. NodeFlow is similar to the block in that example.
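Roughly, the block-based pipeline in that example looks like the sketch below (the fanouts, batch size, and the g / train_nids / feature names are placeholders):

```python
import dgl

# sample a fixed number of neighbors for each GNN layer
sampler = dgl.dataloading.MultiLayerNeighborSampler([10, 25])

dataloader = dgl.dataloading.NodeDataLoader(
    g, train_nids, sampler,
    batch_size=1024, shuffle=True, drop_last=False, num_workers=0)

for input_nodes, output_nodes, blocks in dataloader:
    # each block is a small bipartite graph, playing the role
    # the NodeFlow layers used to play
    x = blocks[0].srcdata['feat']
    y = blocks[-1].dstdata['label']
    # run the model over the blocks and compute the loss here
```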

Thanks VoVAllen!
That works like a charm.

Are the new APIs out yet?
I was trying to feed DGL's graphs into my torch_geometric models.
I’d love to see how the new APIs will process my data.
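For context, what I was attempting is roughly this kind of manual conversion from a DGLGraph to a torch_geometric Data object (the 'feat' key is just an example):

```python
import torch
from torch_geometric.data import Data

def dgl_to_pyg(g):
    # g.edges() returns the source and destination node id tensors
    src, dst = g.edges()
    edge_index = torch.stack([src, dst], dim=0)  # shape [2, num_edges]
    x = g.ndata['feat']                          # assumed node-feature key
    return Data(x=x, edge_index=edge_index)
```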

Yes, as in the example posted above. Why would you like to call this inside torch_geometric?

So, I have a dataset of about 500 graphs ranging from 5,000 to 25,000 nodes, and I built a torch_geometric-based pretraining script to process it. DGL seems to have the perfect setup for handling such large graphs. However, I couldn't find any pretraining/finetuning scripts or code samples, hence that attempt.
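In case it helps, this is roughly how I was hoping to batch those graphs on the DGL side (the graphs list, batch size, and feature key are placeholders):

```python
import dgl
from torch.utils.data import DataLoader

# graphs: a Python list of ~500 DGLGraph objects
def collate(samples):
    # merge a list of graphs into a single batched graph
    return dgl.batch(samples)

loader = DataLoader(graphs, batch_size=8, shuffle=True, collate_fn=collate)

for batched_g in loader:
    feats = batched_g.ndata['feat']  # assumed node-feature key
    # run the model on the batched graph here
```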

Hi,

Which model are you interested in? How would you like to pretrain those models?

I'm looking at a GTN-based approach and a GIN-based one. Currently, I'm using random node masking for pretraining, and if things work out well, I may move to edge representations too.
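To make "random node masking" concrete, here is a simplified sketch of the kind of objective I mean, written with DGL's GINConv (the dimensions, mask rate, and MSE reconstruction loss are illustrative choices, not my actual script):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GINConv

class MaskedNodePretrainer(nn.Module):
    """Mask a random subset of node features and reconstruct them."""
    def __init__(self, in_dim, hid_dim, mask_rate=0.15):
        super().__init__()
        self.mask_rate = mask_rate
        self.mask_token = nn.Parameter(torch.zeros(in_dim))
        self.conv1 = GINConv(nn.Linear(in_dim, hid_dim), aggregator_type='sum')
        self.conv2 = GINConv(nn.Linear(hid_dim, hid_dim), aggregator_type='sum')
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, g, feat):
        # pick a random subset of nodes and replace their features
        # with a learned mask token
        mask = torch.rand(feat.shape[0], device=feat.device) < self.mask_rate
        x = feat.clone()
        x[mask] = self.mask_token
        # encode with two GIN layers, then reconstruct the masked features
        h = F.relu(self.conv1(g, x))
        h = self.conv2(g, h)
        recon = self.decoder(h[mask])
        return F.mse_loss(recon, feat[mask])
```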