I’m working on Tree-LSTM, and I have an implementation like the one in the tutorial that works well. But when I look at GPU utilization and GPU memory allocation, utilization is only about 20%, and it doesn’t increase when I make a bigger batch of graphs; as a result, training time doesn’t change either.
As I understand it, this is because of the size of the NodeBatch passed to the UDFs, so my questions are: why can’t I change the size of NodeBatch, and can you give me advice on how to increase GPU utilization? Maybe some hacks?