The metapath2vec example code [examples/pytorch/metapath2vec.py]

Why are a new optimizer and scheduler created in every iteration?
This may make GPU memory accumulate with each iteration, because the optimizer keeps its own copy of state for the model's parameters.

In the source code:

for iteration in range(self.iterations):
    print("\n\n\nIteration: " + str(iteration + 1))
    # a fresh optimizer and scheduler are created inside the training loop
    optimizer = optim.SparseAdam(self.skip_gram_model.parameters(), lr=self.initial_lr)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, len(self.dataloader))

    batch_x, batch_y ......
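
As a small standalone check (not from the DGL example; the toy sparse Embedding and the loop sizes below are just for illustration), re-creating the scheduler inside the loop restarts the cosine learning-rate schedule every iteration, and re-creating SparseAdam also discards its accumulated moment estimates:

    import torch
    import torch.optim as optim

    emb = torch.nn.Embedding(10, 4, sparse=True)  # toy parameter with sparse gradients
    for it in range(3):
        # re-created each iteration, as in the excerpt above
        optimizer = optim.SparseAdam(list(emb.parameters()), lr=0.025)
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=5)
        # the schedule always starts back at the initial lr
        print("iteration", it, "starts at lr", optimizer.param_groups[0]["lr"])
        for step in range(5):
            scheduler.step()  # decays the lr within this iteration only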

Usually we train a model like this:

    # the optimizer and scheduler are created once, before the training loop
    optimizer = optim.SparseAdam(self.skip_gram_model.parameters(), lr=self.initial_lr)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, len(self.dataloader))
    for iteration in range(self.iterations):
        print("\n\n\nIteration: " + str(iteration + 1))
        batch_x, batch_y ......
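
For completeness, here is a minimal self-contained sketch of that second pattern. The model(batch_x, batch_y) forward returning the skip-gram loss, the dataloader of index batches, and the choice of T_max are assumptions for illustration rather than the exact code in the DGL example; SparseAdam requires the model's embeddings to produce sparse gradients.

    import torch
    import torch.optim as optim

    def train(model, dataloader, iterations, initial_lr, device):
        # create the optimizer and scheduler once, so Adam's moment estimates
        # and the cosine schedule persist across all iterations
        optimizer = optim.SparseAdam(list(model.parameters()), lr=initial_lr)
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
            optimizer, T_max=iterations * len(dataloader))

        for iteration in range(iterations):
            print("Iteration: " + str(iteration + 1))
            for batch_x, batch_y in dataloader:
                batch_x = batch_x.to(device)
                batch_y = batch_y.to(device)
                optimizer.zero_grad()
                loss = model(batch_x, batch_y)  # assumed: forward returns the skip-gram loss
                loss.backward()
                optimizer.step()
                scheduler.step()  # advance the cosine schedule once per batch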

Looks like a bug. Would you mind opening a PR to fix this?

OK, the PR is: https://github.com/dmlc/dgl/pull/2463


Thanks! I’m on it. We are fixing some issues on our CI so it can’t be merged right now. Will let you know once it’s merged.
