Hi,

when creating the negative graph from a batched graph for link prediction, the nodes from each subgraph don't mix with each other, right?

Thanks.

EdgeDataLoader treats a batched graph as a single large graph, so the nodes will mix with each other if you use the default built-in `Uniform` negative sampler.

That being said, you can write your own negative sampler so that each positive example only receives negative examples from its own graph (which is possible with `g.batch_num_nodes()`).
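As a rough sketch of the core logic such a sampler would need (pure PyTorch; the helper name `sample_negatives_per_graph` and its arguments are mine, not a DGL API — it assumes you already have the per-graph node counts from `g.batch_num_nodes()` and know which graph each positive edge belongs to):

```python
import torch

def sample_negatives_per_graph(num_nodes, edge_graph_id, k):
    """For each positive edge, draw k negative destination nodes
    restricted to that edge's own graph within the batch.

    num_nodes:     1-D tensor of per-graph node counts
                   (what g.batch_num_nodes() returns).
    edge_graph_id: 1-D tensor giving the graph index of each positive edge.
    k:             number of negatives per positive edge.
    """
    # Starting node ID of each graph inside the batched graph.
    offsets = torch.cat([torch.zeros(1, dtype=torch.long),
                         num_nodes.cumsum(0)[:-1]])
    # One graph index per negative sample.
    gid = edge_graph_id.repeat_interleave(k)
    # Uniform sample inside [offsets[g], offsets[g] + num_nodes[g]).
    neg_dst = (torch.rand(gid.shape[0]) * num_nodes[gid]).long() + offsets[gid]
    return neg_dst
```

Because `torch.rand` samples from [0, 1), flooring the product always stays strictly inside each graph's node range.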


Thanks for your answer, once again. So how could I fix the following function? By fixing the source and destination nodes with the `g.batch_num_nodes()` code?

```
import torch
import dgl

def construct_negative_graph(graph, k, etype):
    utype, _, vtype = etype
    src, dst = graph.edges(etype=etype)
    # Repeat each source node k times and pair it with k random destinations
    # drawn from the whole batched graph (this is what mixes the subgraphs).
    neg_src = src.repeat_interleave(k)
    neg_dst = torch.randint(0, graph.num_nodes(vtype), (len(src) * k,))
    return dgl.heterograph(
        {etype: (neg_src, neg_dst)},
        num_nodes_dict={ntype: graph.num_nodes(ntype) for ntype in graph.ntypes})
```

Say that your batched graph consists of N graphs with `n[0], n[1], ..., n[N-1]` nodes. If you sample one negative example per positive example, you will need to change your `neg_dst` so that each element in `neg_dst[sum(n[0:i]):sum(n[0:i+1])]` is sampled from the range `sum(n[0:i])` to `sum(n[0:i+1])`.
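Concretely, for a toy batch of three graphs with 3, 2 and 4 nodes (my example numbers), the valid sampling ranges work out like this:

```python
import torch

num_nodes = torch.tensor([3, 2, 4])             # n[0], n[1], n[2]
starts = torch.cat([torch.zeros(1, dtype=torch.long),
                    num_nodes.cumsum(0)[:-1]])  # sum(n[0:i]) for each i
ends = num_nodes.cumsum(0)                      # sum(n[0:i+1]) for each i
# Graph 0 owns node IDs [0, 3), graph 1 owns [3, 5), graph 2 owns [5, 9).
print(list(zip(starts.tolist(), ends.tolist())))  # [(0, 3), (3, 5), (5, 9)]
```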

For a homogeneous graph you can do it like this:

```
import torch

# get n[0], n[1], ..., n[N-1]
num_nodes = graph.batch_num_nodes()
# compute sum(n[0:i]) for every i
node_offset = torch.cat([torch.LongTensor([0]), torch.cumsum(num_nodes, 0)], 0)[:-1]
# sample one negative destination per node, within that node's own graph
neg_dst = (torch.rand(graph.num_nodes()) * num_nodes.repeat_interleave(num_nodes)).long()
neg_dst += node_offset.repeat_interleave(num_nodes)
```

You may need to adjust the code above for heterogeneous graphs and for more than one negative example per positive example.
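One possible adjustment for multiple negatives, sketched in pure PyTorch (the value of `k` and the stand-in `num_nodes` tensor are mine; in real code `num_nodes` would come from `graph.batch_num_nodes()`): expand the per-node graph sizes and offsets `k` times before sampling, so every node gets `k` negatives drawn from its own graph.

```python
import torch

k = 3                                    # negatives per positive (assumed)
num_nodes = torch.tensor([4, 2])         # toy stand-in for graph.batch_num_nodes()
node_offset = torch.cat([torch.zeros(1, dtype=torch.long),
                         num_nodes.cumsum(0)[:-1]])
total = int(num_nodes.sum())             # stand-in for graph.num_nodes()

# Graph size of each node, e.g. [4, 4, 4, 4, 2, 2] for the toy batch.
sizes = num_nodes.repeat_interleave(num_nodes)
# Tile the per-node vectors k times: layout is [all nodes] repeated k times.
neg_dst = (torch.rand(total * k) * sizes.repeat(k)).long()
neg_dst += node_offset.repeat_interleave(num_nodes).repeat(k)
```

Using `.repeat(k)` tiles the whole per-node vector, so the k negatives for a given node are strided across the result; use `.repeat_interleave(k)` on both expanded vectors instead if you want them contiguous.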

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.