lap_norm in DeepWalk

Hi everyone,

What is the purpose of lap_norm in the DeepWalk model? Specifically, why do we need the second term of

`score * emb_pos_v + self.lap_norm * (emb_pos_v - emb_pos_u)`

when updating grad_u_pos?

Please see


Thanks.

Lap norm refers to Laplacian normalization.

It is an optimization we applied to DeepWalk by introducing Laplacian normalization.

Thank you very much for your reply.
Could you please explain this further?
What does Laplacian normalization really mean here? Why do we need to introduce it, and what is the benefit of doing so?

Can anyone help explain this?

Intuitively, by adopting Laplacian normalization as a regularizer, the embeddings of nodes in positive samples are pushed together, since the regularizer minimizes the distance ||x_i - x_j|| (see the sketch after this list). Doing this also brings some other benefits, such as:

  1. accelerating convergence
  2. guaranteeing proximity preservation
  3. improving skip-gram's ability to preserve non-linear structures
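
If it helps, here is a minimal NumPy sketch of where the second term in the question comes from (my own illustration, not the actual DGL code; the names grad_u_pos, emb_pos_u, emb_pos_v and the lap_norm value just mirror the snippet in the question). Taking the gradient of the regularizer (lap_norm / 2) * ||x_u - x_v||^2 with respect to x_u, and flipping the sign because the skip-gram objective is maximized by gradient ascent, gives exactly lap_norm * (emb_pos_v - emb_pos_u):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grad_u_pos(emb_pos_u, emb_pos_v, lap_norm=0.01):
    # Skip-gram part: d/du log sigmoid(u . v) = (1 - sigmoid(u . v)) * v
    score = 1.0 - sigmoid(np.dot(emb_pos_u, emb_pos_v))
    # Laplacian part: minimizing (lap_norm / 2) * ||u - v||^2 contributes
    # -lap_norm * (u - v) = lap_norm * (v - u) to the ascent direction,
    # pulling u toward its positive context v.
    return score * emb_pos_v + lap_norm * (emb_pos_v - emb_pos_u)

# Toy check: repeated updates on u alone pull it toward v.
rng = np.random.default_rng(0)
u, v = rng.normal(size=8), rng.normal(size=8)
print("distance before:", np.linalg.norm(u - v))
for _ in range(200):
    u += 0.1 * grad_u_pos(u, v)
print("distance after: ", np.linalg.norm(u - v))
```

Note that once the skip-gram term saturates (sigmoid(u . v) close to 1, so score close to 0), only the Laplacian term keeps acting, which is what steadily ties the embeddings of positive pairs together.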

Though the theoretical proofs have not been published yet (they are still under review), a similar use of Laplacian normalization can be found in this paper:

Thank you very much for your reply and the reference.
