Hi
I’m interested in implementing a normalized cut criteria as an additional loss function. I want this normalized cut loss to be incorporated into the back-propagation. For example, if the output of a given network are the logits, and pushed through a F.softmax(logits) layer, I want to be able to apply the normalized cut criteria to the predicted labels of the softmax output.
How can I go about doing this? Other forums mentioned that Pytorch does not explicitly compute the one-hot encoding matrix for losses like CrossEntropy
. Below is an implementation that computes the ncut criteria – I’m just not sure if this would be incorporated into a global combination of loss functions such as LOSS = CrossEntropy() + NCuts()
and then be incorporated into the backprop computation.
import torch
import numpy as np
def normalized_cut(graph, one_hot):
"""
Parameters:
- - - - - - - - -
graph: DGL graph
graph structure of data
one_hot: int numpy array
one-hot encoding matrix of predicted labels
Returns:
- - - -
loss: torch float tensor
ncut loss value
"""
A = graph.adjacency_matrix()
d = torch.sparse.sum(A, dim=0)
one_hot = torch.tensor(one_hot, requires_grad=True).float()
assoc = torch.matmul(one_hot.t(), torch.matmul(A, one_hot))
degree = torch.matmul(d, one_hot)
loss = torch.nansum(assoc.diag() / degree)
return -loss
Any help is appreciated – perhaps this question might be more appropriate for the Pytorch forum? Thanks.