How to do mini-batch training?
I would like to ask how to do mini-batch training. Is it correct to compute the embeddings of all nodes on the whole graph, and then compute the loss and update in batches using only a subset of the nodes? This seems a little unreasonable to me.
The main pseudocode is as follows:

for every epoch:
    for mini_batch_id in batches:
        # get the embeddings of all nodes on the whole graph
        all_embedding = model(whole_graph, all_node_id, all_edge_type, all_edge_norm)
        # compute the loss only on the current mini-batch of nodes
        loss_every_batch = model.calc_loss(all_embedding, mini_batch_id)
        optimizer.zero_grad()
        loss_every_batch.backward()
        optimizer.step()
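To make the pattern concrete, here is a minimal runnable sketch in PyTorch of what I mean: a full-graph forward pass every step, with the loss restricted to the mini-batch nodes. The `nn.Linear` encoder, `features`, `labels`, and the batch size are placeholders standing in for the real GNN and data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for the real graph data
num_nodes, feat_dim, num_classes = 100, 16, 4
features = torch.randn(num_nodes, feat_dim)
labels = torch.randint(0, num_classes, (num_nodes,))

# Stand-in encoder for model(whole_graph, ...)
model = nn.Linear(feat_dim, num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Split node ids into mini-batches
batches = torch.randperm(num_nodes).split(32)

for epoch in range(2):
    for batch_ids in batches:
        # forward pass over ALL nodes (full-graph embedding)
        all_embedding = model(features)
        # loss computed only on the mini-batch nodes
        loss = criterion(all_embedding[batch_ids], labels[batch_ids])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Note that although the loss is computed on a subset of nodes, every step still runs the forward pass over the entire graph, which is the part that feels wasteful to me.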
Thank you!!!