Node regression: GCN not learning, MLP has no problem

I have a set of independent graphs whose nodes carry two features, “degree” and “strat” (either 1 or 0), and I’m trying to predict degree * strat for each node. I trained an MLP on the individual node features to do this. No problem.
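The MLP baseline looks roughly like this (a minimal sketch; the layer sizes, data generation, and variable names here are illustrative, not from the original code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Per-node features: degree (1..9) and strat (0 or 1); target = degree * strat
degree = torch.randint(1, 10, (256, 1)).float()
strat = torch.randint(0, 2, (256, 1)).float()
x = torch.cat([degree, strat], dim=1)   # (256, 2) node features
y = degree * strat                      # (256, 1) regression target

mlp = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(mlp(x), y)
    loss.backward()
    opt.step()
```

Since strat is binary, the target is exactly representable by a small ReLU net, so this converges quickly.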

I tried to train a GCN to do the same thing, but the loss doesn’t improve, and on inspecting the logits I notice they’re all very similar and nowhere close to the labels.

The GCN example works like this:

def forward(self, features):

And I edited it to be able to pass it a graph like this:

def forward(self, g, features):

And it breaks. I have no idea why. But I do need it to take a different graph each time, so how do I accomplish that?

Create a list of graphs and iterate over it:

graphs = [g1, g2, ...]
for graph in graphs:
    sgd.zero_grad()
    n_feats = graph.ndata['feat']
    labels = graph.ndata['label']
    out = gcn(graph, n_feats)
    loss = loss_fnc(out, labels)
    loss.backward()
    sgd.step()

That’s what I do in train_loop:

def train_loop(model, train_data, loss_fcn, optimizer, num_epochs=1):
    dur = []
    for epoch in range(num_epochs):
        for step, g in enumerate(train_data):
            model.train()  # sets training mode rather than evaluation
            if step >= 3:  # skip the first few steps when timing
                t0 = time.time()
            # forward
            features = g.ndata['feat']
            labels = g.ndata['label']
            logits = model(g, features)
            loss = loss_fcn(logits, labels)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            if step >= 3:
                dur.append(time.time() - t0)

It fails to learn, as described above.

A batch full of graphs works perfectly, yet passing them sequentially doesn’t!

I need to pass them sequentially to use the GCN in RL. Any thoughts?

I narrowed the problem down, so I made a new, clearer thread.

redirected to Train GCN with many graphs instead of one batch (for RL).