RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [14328, 334]] is at version 1; expected version 0 instead

When I run the HGSL model, the acm4GTN dataset works normally. When I switch to the dblp4GTN or imdb4GTN dataset, this error occurs (the following is from dblp4GTN):

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [14328, 334]] is at version 1; expected version 0 instead.

I set `undirected_relations = author-paper,paper-conference` for the dblp4GTN dataset in the config.ini file. I can see that the tensor of shape [14328, 334] corresponds to the node type "paper" in the variable h_dict, but I don't know why it fails. Alternatively, could you add the datasets "dblp4GTN" and "imdb4GTN" for the HGSL model? And what are the differences between "acm4GTN" and the other *4GTN datasets?
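For anyone hitting the same traceback: this is not specific to HGSL. It is PyTorch's version-counter check, and it can be reproduced in a few lines. This is a minimal sketch, not the HGSL code; `exp()` saves its output for the backward pass, and the in-place `add_()` then bumps that tensor's version from 0 to 1, which is exactly the "is at version 1; expected version 0" failure:

```python
import torch

# Minimal reproduction of the autograd version-counter error.
x = torch.ones(3, requires_grad=True)
y = x.exp()   # exp() saves its output y for use in backward()
y.add_(1)     # in-place op: y's version counter goes from 0 to 1

raised = False
try:
    y.sum().backward()
except RuntimeError as err:
    # Same failure mode as the HGSL traceback.
    raised = "inplace operation" in str(err)

print(raised)  # True
```

Wrapping the forward/backward pass in `with torch.autograd.set_detect_anomaly(True):` makes PyTorch print a second traceback pointing at the forward op that produced the tensor later modified in place, which should help locate the offending line in the model.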

Hi, could you add more context and provide a minimal code snippet to reproduce your error?

Yeah. The following is my HGSL setup:
HGSL model

  1. Because I want to use datasets other than acm4GTN, I modified the variable `undirected_relations` in the [HGSL] section of the config.ini file. For example, I set `undirected_relations = author-paper,paper-conference` for dblp4GTN, as described above.

  2. I set some parameters in main.py, as follows:

import argparse
from experiment import Experiment

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', '-m', default='HGSL', type=str, help='name of models')
    parser.add_argument('--task', '-t', default='node_classification', type=str, help='name of task')
    # link_prediction / node_classification
    parser.add_argument('--dataset', '-d', default='dblp4GTN', type=str, help='name of datasets')
    parser.add_argument('--gpu', '-g', default=0, type=int, help='-1 means cpu')
    parser.add_argument('--use_best_config', default=True, action='store_true', help='will load utils.best_config')
    parser.add_argument('--load_from_pretrained', action='store_true', help='load model from the checkpoint')
    args = parser.parse_args()

    experiment = Experiment(model=args.model, dataset=args.dataset, task=args.task, gpu=args.gpu,
                            use_best_config=args.use_best_config, load_from_pretrained=args.load_from_pretrained)
    res = experiment.run()
    print('res: {}'.format(res))

Everything else remains unchanged.
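For reference, the edited [HGSL] section looks roughly like this. Only the `undirected_relations` line is from my setup above; any other keys in that section are left as shipped and omitted here:

```ini
; config.ini (sketch) -- only undirected_relations was changed for dblp4GTN
[HGSL]
undirected_relations = author-paper,paper-conference
```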

It seems that it doesn't use DGL to construct the model. Could you try raising an issue in their repo?

It's actually based on DGL. The following is from their README file, and I will raise an issue in their repo.

This is an open-source toolkit for Heterogeneous Graph Neural Networks based on DGL (Deep Graph Library) and PyTorch. We integrate SOTA models for heterogeneous graphs.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.