I have created a heterograph with the following structure:
Graph(num_nodes={'buyer': 305171, 'seller': 31375},
      num_edges={('buyer', 'amount', 'buyer'): 349379, ('buyer', 'txn_date', 'buyer'): 349379},
      metagraph=[('buyer', 'buyer', 'amount'), ('buyer', 'buyer', 'txn_date')])
I am building an RGCN model for node classification, following 5.1 Node Classification/Regression — DGL 0.6.1 documentation. After the conv1 layer, I see that the 'buyer' entry is dropped from the output node features. After the conv2 layer, the 'seller' entry is dropped as well, leaving an empty h_dict after this step:
h_dict = model(hetero_graph, {'user': user_feats, 'item': item_feats})
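To show what I mean by "removed", this is roughly how I inspect the intermediate outputs (a sketch; node_features is a placeholder for the same input dict passed to the model above, and conv1/conv2 are the layers of the model shown further below):

# Sketch: check which node types survive each HeteroGraphConv layer
h1 = model.conv1(hetero_graph, node_features)
print(h1.keys())   # 'buyer' is no longer present here
h2 = model.conv2(hetero_graph, {k: F.relu(v) for k, v in h1.items()})
print(h2.keys())   # empty at this point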
The way I have created the graph is:
Nodes: buyer, seller
Edges: amount, txn_date
import dgl
import torch

graph_data = {
    ('buyer', 'amount', 'seller'): (amount_edge_index[0], amount_edge_index[1]),
    ('buyer', 'txn_date', 'seller'): (date_edge_index[0], date_edge_index[1])
}
hetero_graph = dgl.heterograph(graph_data).to(device)
hetero_graph.nodes['buyer'].data['feature'] = buyer_x
hetero_graph.nodes['buyer'].data['label'] = y
hetero_graph.nodes['seller'].data['feature'] = merchant_x
hetero_graph.nodes['buyer'].data['train_mask'] = torch.zeros(hetero_graph.num_nodes('buyer'), dtype=torch.bool).bernoulli(0.6).to('cuda')
hetero_graph.to(device)  # note: .to() is not in-place; the graph was already moved to device above
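As a sanity check, the node and edge types can be inspected directly on the graph (ntypes, canonical_etypes, and num_nodes are standard DGLGraph attributes/methods):

# Sanity check: list the node types and canonical edge types of the constructed graph
print(hetero_graph.ntypes)             # expected: ['buyer', 'seller']
print(hetero_graph.canonical_etypes)   # expected: [('buyer', 'amount', 'seller'), ('buyer', 'txn_date', 'seller')]
print(hetero_graph.num_nodes('buyer'), hetero_graph.num_nodes('seller'))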
And this is the model:
import torch.nn as nn
import torch.nn.functional as F
import dgl.nn as dglnn

class RGCN(nn.Module):
    def __init__(self, in_feats, hid_feats, out_feats, rel_names):
        super().__init__()
        # one GraphConv per relation, aggregated with 'sum' across relations
        self.conv1 = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(in_feats, hid_feats)
            for rel in rel_names}, aggregate='sum')
        self.conv2 = dglnn.HeteroGraphConv({
            rel: dglnn.GraphConv(hid_feats, out_feats)
            for rel in rel_names}, aggregate='sum')

    def forward(self, graph, inputs):
        # inputs are features of nodes, keyed by node type
        h = self.conv1(graph, inputs)
        h = {k: F.relu(v) for k, v in h.items()}
        h = self.conv2(graph, h)
        return h
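For completeness, this is roughly how I instantiate and call the model following the linked tutorial (a sketch: hid_feats=64 and num_classes are placeholder values, and using a single in_feats assumes the 'buyer' and 'seller' features have the same dimensionality):

# Sketch of instantiation and the forward call, following the 5.1 tutorial
model = RGCN(in_feats=buyer_x.shape[1], hid_feats=64, out_feats=num_classes,
             rel_names=hetero_graph.etypes).to(device)
node_features = {
    'buyer': hetero_graph.nodes['buyer'].data['feature'],
    'seller': hetero_graph.nodes['seller'].data['feature'],
}
h_dict = model(hetero_graph, node_features)
print(h_dict.keys())   # this comes back empty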