MetaPath2Vec Hyperparameter Selection

I’ve been experimenting with creating network embeddings for a heterogeneous graph using MetaPath2Vec. Has anyone had experience with tuning/selecting the hyperparameters (namely the number of epochs, learning rate, negative sample size, etc.)?

I compared the embeddings generated after 25 epochs and after 100 epochs: the latter projects all the nodes into one big cluster (despite the loss decreasing continuously over every epoch), whereas 25 epochs produces sensibly separated clusters. I know this is very case-dependent, but is there a way to quantify when to stop training, and how to select the other hyperparameters?
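One way to put a number on the "collapsed vs. separated" difference is to score each checkpoint's embeddings with a cluster-quality measure such as the silhouette score, and keep the checkpoint that scores best. This is only a hedged sketch: the two embedding matrices below are synthetic stand-ins for MetaPath2Vec output at different epochs, and the cluster count is an assumption you'd set from your own graph.

```python
# Sketch: quantify embedding quality per checkpoint with the silhouette
# score, and keep the best-scoring checkpoint instead of training to a
# fixed epoch count. Embeddings here are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def embedding_quality(embeddings, n_clusters=4, seed=0):
    """Cluster the embeddings and return the silhouette score in [-1, 1];
    higher means tighter, better-separated clusters."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(embeddings)
    return silhouette_score(embeddings, labels)

rng = np.random.default_rng(0)
# Stand-in for the "25 epochs" case: four well-separated Gaussian blobs.
separated = np.concatenate(
    [rng.normal(loc=c, scale=0.3, size=(50, 16)) for c in (0, 5, 10, 15)]
)
# Stand-in for the "100 epochs" case: everything collapsed into one blob.
collapsed = rng.normal(loc=0.0, scale=0.3, size=(200, 16))

q_sep = embedding_quality(separated)
q_col = embedding_quality(collapsed)
print(f"separated: {q_sep:.3f}, collapsed: {q_col:.3f}")
```

In a real loop you would compute this every few epochs on the current embeddings and stop (or roll back) once the score starts degrading, even while the training loss is still falling.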

This is the general problem of measuring the quality of unsupervised embeddings. If you have a downstream application with a metric over the learned embeddings (e.g. classification accuracy), the most straightforward way is to use that metric to measure embedding quality, treating your self-supervised approach (e.g. MetaPath2Vec) as part of the pipeline that produces the downstream prediction. Otherwise, you will have to resort to general-purpose metrics (e.g. How To Evaluate Unsupervised Learning Models | by Callum Ballard | Towards Data Science).
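The downstream-metric idea above can be sketched as follows: freeze the embeddings, use them as features for a simple classifier on whatever labels you have (node types, known classes), and compare cross-validated accuracy across hyperparameter settings. The embeddings and labels here are synthetic placeholders, not real MetaPath2Vec output.

```python
# Hedged sketch of evaluating embeddings via a downstream metric:
# frozen node embeddings -> simple classifier -> cross-validated accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_per_class, dim = 100, 32
# Placeholder embeddings: two node classes with a class-dependent offset.
X = np.concatenate([
    rng.normal(loc=0.0, scale=1.0, size=(n_per_class, dim)),
    rng.normal(loc=1.5, scale=1.0, size=(n_per_class, dim)),
])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Cross-validated accuracy serves as the embedding-quality proxy: rerun
# this for each epoch count / learning rate / negative-sample setting and
# keep the configuration that scores best.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean downstream accuracy: {scores.mean():.3f}")
```

This also answers the early-stopping question indirectly: the 100-epoch embeddings that collapse into one cluster would score poorly here even though the training loss kept decreasing.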

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.