Negative sampling on block diagonal matrix

Hi

I’m interested in doing negative sampling on a graph for node classification. My training graph is a batch of graphs whose adjacency matrix is block diagonal, with each block corresponding to a single training example. I want to perform negative sampling on each block independently – that is, I want to generate negative samples for each block using only the nodes from that same block.

Apart from skipping dgl.batch(list_of_graphs) and instead iterating over the individual graphs one by one (sketched below), how can I go about doing this?
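For reference, the per-graph fallback I would like to avoid looks roughly like this (a sketch; dgl.rand_graph and the sizes stand in for my real meshes):

import dgl
import torch

# Skip batching and sample k negative pairs from each graph separately,
# so both endpoints always come from the same training example.
list_of_graphs = [dgl.rand_graph(n, 2 * n) for n in (5, 8, 3)]
k = 4                                  # negative pairs per graph
neg_pairs = []
for g in list_of_graphs:               # one graph per training example
    n = g.num_nodes()
    src = torch.randint(0, n, (k,))    # drawn only from this graph's nodes
    dst = torch.randint(0, n, (k,))
    neg_pairs.append((src, dst))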

Thanks

k

As far as I understand, negative sampling is more common in link prediction tasks. However, it seems that you are doing node classification. Could you elaborate on what the negative examples are?

In this case, I’m working on a segmentation task, where the nodes are part of a triangulated mesh G=(V,E). I’m interested in implementing a loss function similar to the one described in the GraphSAGE paper:

J_{G}(z_{u}) = -\log\big(\sigma(z_{u}^{\top} z_{v})\big) - Q \cdot \mathbb{E}_{v_{n} \sim P_{n}(v)} \log\big(\sigma(-z_{u}^{\top} z_{v_{n}})\big)

so the negative samples essentially just introduce random links between nodes in the mesh. However, I have many meshes in my training data, each with its own node feature matrix, and I do not want to shuffle node features across meshes when computing this loss.
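For concreteness, a minimal PyTorch sketch of that loss under my setup (the tensor names are mine, not from the paper; neg_v holds Q negative samples per positive pair):

import torch
import torch.nn.functional as F

def unsup_loss(z, pos_u, pos_v, neg_v, Q):
    # z: (num_nodes, d) embeddings; pos_u, pos_v: (P,) linked node pairs;
    # neg_v: (P, Q) negative samples drawn from P_n(v).
    pos_score = (z[pos_u] * z[pos_v]).sum(-1)               # z_u^T z_v
    neg_score = (z[pos_u].unsqueeze(1) * z[neg_v]).sum(-1)  # z_u^T z_{v_n}
    pos_term = F.logsigmoid(pos_score)                      # log sigma(pos)
    neg_term = F.logsigmoid(-neg_score).mean(1)             # E[log sigma(-neg)]
    return -(pos_term + Q * neg_term).mean()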

k

Is your negative example a random graph with the same number of nodes? If so, you will need to generate the negative example graphs individually and then batch() them together.
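Roughly like this, as a sketch (assuming a recent DGL where dgl.graph() builds a graph from an edge list; the sizes are illustrative):

import dgl
import torch

# One random negative graph per positive graph, batched the same way so
# the blocks of the two batched graphs stay aligned.
pos_graphs = [dgl.rand_graph(n, 2 * n) for n in (5, 8, 3)]
neg_graphs = []
for g in pos_graphs:
    n, m = g.num_nodes(), g.num_edges()
    src = torch.randint(0, n, (m,))    # endpoints drawn only from
    dst = torch.randint(0, n, (m,))    # this graph's own nodes
    neg_graphs.append(dgl.graph((src, dst), num_nodes=n))
pos_batch = dgl.batch(pos_graphs)      # block-diagonal positives
neg_batch = dgl.batch(neg_graphs)      # matching block-diagonal negatives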

Other than that, a helper along the lines of “generate a random integer array where each element has its own low and high range” can help you:

import torch

def rand_int(lows, highs):
    # Draw one random integer per element, uniformly in [lows[i], highs[i]).
    diff = highs - lows
    rand = (torch.rand(len(lows)) * diff).long()
    return lows + rand
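For example, with per-block node counts (in recent DGL versions these come from batch_num_nodes() on a batched graph) you can draw negative endpoints that never leave their own block:

import torch

# Per-block node counts; for a batched graph these could come from
# g.batch_num_nodes() (the sizes here are illustrative).
num_nodes = torch.tensor([5, 8, 3])
offsets = torch.cat([torch.zeros(1, dtype=torch.long), num_nodes.cumsum(0)])
k = 4                                      # negatives per block
lows = offsets[:-1].repeat_interleave(k)   # first node id of each block
highs = offsets[1:].repeat_interleave(k)   # one past its last node id
neg_dst = rand_int(lows, highs)            # ids stay inside their block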

Hi @BarclayII

Thank you – yes, this is what I ended up doing. It seems to be a reasonable approach thus far.

k
