How to handle image embeddings in PinSAGE?

Hello,

I’m trying the DGL implementation of PinSAGE. I see that the examples included in the repo use text embeddings created with torchtext. I would prefer to use image embeddings of the items instead - is that possible?
I tried to do this in the processing part like this:

g.nodes['item'].data['image_embedding'] = torch.FloatTensor(list(items['image_embedding'].values))

I can see that the model creates a LinearProjector layer for these embeddings, but I’m not sure if this is the correct way to handle them. Do you have any other suggestions?

Best regards
K

Hi,

We haven’t tried image features before, but I think you can start with intermediate representations from ResNet or VGGNet.
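
For example, a rough sketch with a pretrained ResNet-50 from torchvision could look like this (assuming the item images are available as PIL images in the same order as the 'item' node IDs; item_images and g are placeholder names here):

import torch
import torchvision.models as models
import torchvision.transforms as T

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained ResNet-50 with the classification head removed, so the output
# is the 2048-dimensional pooled feature (the "intermediate representation").
resnet = models.resnet50(pretrained=True)
resnet.fc = torch.nn.Identity()
resnet.eval()

embeddings = []
with torch.no_grad():
    for img in item_images:                      # item_images: list of PIL images (placeholder)
        x = preprocess(img).unsqueeze(0)         # shape (1, 3, 224, 224)
        embeddings.append(resnet(x).squeeze(0))  # shape (2048,)

# One fixed-size float tensor per item node, in node ID order.
g.nodes['item'].data['image_embedding'] = torch.stack(embeddings)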

Yes, I understand that. But my question is how to handle these embeddings. Is the way I mentioned correct, or should I do it some other way?

Hi,

What exactly do you mean by "handle"? Do you mean how to set the features as node data?

Yes, that’s right. I can see that the text embeddings for the nodes (made with torchtext) are more complicated and are set during training.

Yes, your approach is right. You just need to set the tensor as node data.
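
For example, a minimal sketch of that assignment, assuming items is a pandas DataFrame whose 'image_embedding' column holds one fixed-length vector (list or NumPy array) per item, in the same order as the 'item' node IDs:

import numpy as np
import torch

# Stack the per-item vectors into a single (num_items, embedding_dim) array,
# then convert it to a float tensor and attach it to the item nodes.
image_feats = np.stack(items['image_embedding'].values)
g.nodes['item'].data['image_embedding'] = torch.from_numpy(image_feats).float()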
