Memory leak with NeighborSampler

Hi, we are iterating over graphs and applying NeighborSampler to each of them. For the sampler to work, the graphs have to be read-only (immutable); however, making them read-only causes a memory leak.

Below is some minimal code to show the problem we are facing. Is there any way to avoid this?

Thanks!

import psutil
import torch
import dgl

def mem():
    my_mem = psutil.virtual_memory()
    return 'Memory used: {:.2f} %, {:.2f} MB | free: {:.2f} MB'.format(
        my_mem.percent, my_mem.used / 1024 / 1024, my_mem.free / 1024 / 1024)


def create_graph(batch_size=1):
    graphs = []
    source_nodes = []
    counter = 0
    for i in range(batch_size):
        g = dgl.DGLGraph()
        g.add_nodes(20001)
        dst = [0] * 10000 + [1] * 10000
        src = list(range(1, 20001))
        g.add_edges(src, dst)
        g.ndata['data'] = torch.ones(20001, 20)
        counter += g.number_of_nodes()
        source_nodes.append(counter)
        graphs.append(g)  # collect every graph in the batch, not just the last
    graphs = dgl.batch(graphs)
    return graphs, source_nodes

expand_factor = 500
batch_size = 32
for epoch in range(501):
    # Iterate over a fresh batch of graphs each epoch
    input_graphs, source_nodes = create_graph(batch_size)
    # The memory leak happens here
    input_graphs.readonly()
    """
    Batched_Graph_Sampler = dgl.contrib.sampling.sampler.NeighborSampler(g = input_graphs,
                                                                  batch_size = batch_size,
                                                                  expand_factor = expand_factor,
                                                                  num_hops = 2, 
                                                                  neighbor_type='in', 
                                                                  seed_nodes= source_nodes, 
                                                                  shuffle=False,
                                                                  num_workers=4, 
                                                                  prefetch=False, 
                                                                  add_self_loop=False)
    """
    if epoch % 10 == 0:
        print(epoch, mem())

Out:

0 Memory used: 3.90 %, 3788.69 MB | free: 84411.16 MB
10 Memory used: 3.90 %, 3826.34 MB | free: 84373.52 MB
20 Memory used: 4.00 %, 3865.22 MB | free: 84334.64 MB
30 Memory used: 4.00 %, 3903.12 MB | free: 84296.74 MB
40 Memory used: 4.00 %, 3942.48 MB | free: 84257.36 MB
50 Memory used: 4.10 %, 3981.61 MB | free: 84218.23 MB
60 Memory used: 4.10 %, 4020.99 MB | free: 84178.86 MB
70 Memory used: 4.10 %, 4059.38 MB | free: 84140.47 MB
80 Memory used: 4.10 %, 4099.25 MB | free: 84100.60 MB
90 Memory used: 4.20 %, 4137.14 MB | free: 84062.70 MB
100 Memory used: 4.20 %, 4177.48 MB | free: 84022.36 MB
110 Memory used: 4.20 %, 4216.61 MB | free: 83983.23 MB
120 Memory used: 4.30 %, 4255.46 MB | free: 83944.38 MB
130 Memory used: 4.30 %, 4294.35 MB | free: 83905.50 MB
140 Memory used: 4.30 %, 4333.23 MB | free: 83866.62 MB
150 Memory used: 4.40 %, 4371.84 MB | free: 83827.98 MB
160 Memory used: 4.40 %, 4409.99 MB | free: 83789.84 MB
170 Memory used: 4.40 %, 4450.29 MB | free: 83749.54 MB
180 Memory used: 4.40 %, 4487.93 MB | free: 83711.89 MB
190 Memory used: 4.50 %, 4527.04 MB | free: 83672.77 MB
200 Memory used: 4.50 %, 4565.68 MB | free: 83634.14 MB
210 Memory used: 4.50 %, 4604.81 MB | free: 83595.01 MB
220 Memory used: 4.60 %, 4643.44 MB | free: 83556.37 MB
230 Memory used: 4.60 %, 4683.31 MB | free: 83516.50 MB
240 Memory used: 4.60 %, 4721.45 MB | free: 83478.36 MB
250 Memory used: 4.70 %, 4760.09 MB | free: 83439.72 MB
260 Memory used: 4.70 %, 4799.46 MB | free: 83400.35 MB
270 Memory used: 4.70 %, 4839.09 MB | free: 83360.73 MB
280 Memory used: 4.70 %, 4877.48 MB | free: 83322.34 MB
290 Memory used: 4.80 %, 4915.62 MB | free: 83284.19 MB
300 Memory used: 4.80 %, 4955.55 MB | free: 83244.26 MB
310 Memory used: 4.80 %, 4994.12 MB | free: 83205.69 MB
320 Memory used: 4.90 %, 5033.25 MB | free: 83166.56 MB
330 Memory used: 4.90 %, 5071.15 MB | free: 83128.66 MB
340 Memory used: 4.90 %, 5111.27 MB | free: 83088.55 MB
350 Memory used: 5.00 %, 5149.42 MB | free: 83050.40 MB
360 Memory used: 5.00 %, 5189.29 MB | free: 83010.54 MB
370 Memory used: 5.00 %, 5227.43 MB | free: 82972.39 MB
380 Memory used: 5.00 %, 5266.31 MB | free: 82933.51 MB
390 Memory used: 5.10 %, 5305.93 MB | free: 82893.89 MB
400 Memory used: 5.10 %, 5344.32 MB | free: 82855.50 MB
410 Memory used: 5.10 %, 5383.09 MB | free: 82816.73 MB
420 Memory used: 5.20 %, 5422.07 MB | free: 82777.74 MB
430 Memory used: 5.20 %, 5461.61 MB | free: 82738.20 MB
440 Memory used: 5.20 %, 5501.19 MB | free: 82698.62 MB
450 Memory used: 5.30 %, 5540.34 MB | free: 82659.47 MB
460 Memory used: 5.30 %, 5578.84 MB | free: 82620.83 MB
470 Memory used: 5.30 %, 5618.22 MB | free: 82581.46 MB
480 Memory used: 5.40 %, 5656.61 MB | free: 82543.07 MB
490 Memory used: 5.40 %, 5696.23 MB | free: 82503.45 MB
500 Memory used: 5.40 %, 5732.81 MB | free: 82466.86 MB
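
Independent of DGL, Python's standard-library tracemalloc can help confirm which allocation sites actually grow across iterations. A minimal, self-contained sketch (the bytearray leak here is simulated for illustration, not taken from the repro above):

```python
# Sketch: compare two tracemalloc snapshots around a loop to find the
# source lines whose allocations grow. Illustrative only.
import tracemalloc

tracemalloc.start()

leak = []  # stands in for whatever accumulates per iteration

snap_before = tracemalloc.take_snapshot()
for _ in range(100):
    leak.append(bytearray(10_000))  # simulated per-iteration leak (~1 MB total)
snap_after = tracemalloc.take_snapshot()

# Entries with a large positive size_diff point at the code lines
# responsible for the growth; they are sorted largest-first.
stats = snap_after.compare_to(snap_before, 'lineno')
for stat in stats[:3]:
    print(stat)
```

Running this against the repro loop (snapshot before and after a few hundred epochs) would show whether the growth comes from Python-level objects or from native allocations that tracemalloc cannot see.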

Hi,

We will be deprecating dgl.contrib.sampling.sampler.NeighborSampler in favor of the new dgl.sampling APIs. A tutorial is available at https://github.com/dglai/WWW20-Hands-on-Tutorial/blob/master/large_graphs/large_graphs.ipynb, and we have updated our GraphSAGE examples to use this training method at https://github.com/dmlc/dgl/tree/master/examples/pytorch/graphsage.

Could you please take a look?

Thanks.

@BarclayII Thanks for your reply! I will try the new API.