Connecting a linear layer to a homogeneous graph

I have a linear layer with an output size of 128 dimensions, and a graph that I have passed through a GCN so that its node features are 128-dimensional as well.

The problem is, I want to connect the outputs of the linear layer to the zero in-degree nodes of my graph. Can I do that?

This is the sample model:

import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv

class gnn(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 128)
        self.gcn = GraphConv(128, 128, allow_zero_in_degree=True)

    def forward(self, x, dgl_graph, feat, weight):
        x = torch.flatten(x, 1)
        x = self.fc1(x)        # 128-dim linear output
        x = F.relu(x)
        # 128-dim node embeddings from the GCN
        graph_out = self.gcn(dgl_graph, feat=feat, edge_weight=weight)
        return x, graph_out

Are you asking how to filter out the zero in-degree nodes in the graph?

Yes, so that it won't be a problem when I use prop_nodes. When I use the prop_nodes function I get a warning that the graph contains invalid edges. Is this behaviour expected?

Why are you using prop_nodes?

For filtering zero in-degree nodes, you can try zero_degree_node_ids = th.where(g.in_degrees() == 0)[0]
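For instance, a minimal sketch on a toy graph; writing the 128-dim linear outputs into those nodes' features is one possible reading of "connecting" the two, not a confirmed recipe:

import dgl
import torch as th

# hypothetical toy graph: edges 0->1 and 0->2, so node 0 has in-degree 0
g = dgl.graph((th.tensor([0, 0]), th.tensor([1, 2])))
zero_degree_node_ids = th.where(g.in_degrees() == 0)[0]   # tensor([0])

# one way to "connect" the linear layer to those nodes: write its
# 128-dim outputs into the features of the zero in-degree nodes
g.ndata['h'] = th.zeros(g.num_nodes(), 128)
g.ndata['h'][zero_degree_node_ids] = th.randn(len(zero_degree_node_ids), 128)  # stand-in for fc1 output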

I need to use prop_nodes to pull the nodes sequentially and apply a user-defined message-passing function of my own. I want to concatenate the embeddings generated by the GCN with my edge features, but I am not able to do that. Can I get some help with this?

def edge_udf(self, edges):
    nodesToPull = edges.src
    # Here I want to concatenate the gcn-generated embeddings (of the
    # pulled nodes) with the edge features of the graph and return that
    # as the message, but I don't understand how to do it for the
    # tensors pulled by the prop_nodes function.
    return {'m': finalvalue}
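For reference, one way such a UDF could look, assuming the GCN embeddings are stored in g.ndata['h'] and the edge features in g.edata['e'] (both field names are assumptions):

import torch

def edge_udf(edges):
    # edges.src['h']: (num_edges, D) source-node embeddings
    # edges.data['e']: (num_edges, F) edge features
    # concatenate them along the feature dimension as the message
    m = torch.cat([edges.src['h'], edges.data['e']], dim=1)
    return {'m': m}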

Hi,

Could you provide a sample code snippet? I don't think I understand your problem yet.

Sure. Suppose I have one node feature 'h' and two edge features 'e'. What I want in my message function is for the node's 20-dimensional embedding (generated by the GCN) to be multiplied with my edge feature 'e'. How can I do that?

class gnn(nn.Module):
    def __init__(self):
        super().__init__()
        self.store = 0
        self.embed = None
        self.fc1 = nn.Linear(4, 7)
        self.fcfeat1 = nn.Linear(4, 20)
        self.gcn = GraphConv(20, 20, allow_zero_in_degree=True)

    def edge_udf(self, edges):
        # multiply the pulled node's embedding with its edge feature 'e'
        # (pseudocode: this is the part I don't know how to write)
        m = dimension(edges.src) * 'e'
        return {'m': m}

    def forward(self, x, dgl_graph, feat, edges):
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        self.store = x
        feat = self.fcfeat1(feat)
        x = self.gcn(dgl_graph, feat=feat)
        x = F.relu(x)
        self.embed = x
        reduce_func = dgl.function.sum('m', 'h')
        dgl.prop_nodes_topo(dgl_graph, self.edge_udf, reduce_func)
        out = dgl_graph.ndata['h']
        return out

You can use

import dgl.function as fn
g.update_all(fn.u_mul_e('h', 'e', 'm'), fn.sum('m', 'out'))

to put the result on the nodes, or

def efunc(edges):
    return {'m': edges.src['h'] * edges.data['e']}
g.apply_edges(efunc)

to put the result on the edges.
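To make the shapes concrete, a quick toy run of both patterns (the graph and feature sizes are made up):

import dgl
import dgl.function as fn
import torch as th

g = dgl.graph((th.tensor([0, 0]), th.tensor([1, 2])))
g.ndata['h'] = th.rand(3, 20)        # node embeddings
g.edata['e'] = th.rand(2, 1)         # per-edge scalar feature

# node-side: sum of h_src * e over each node's incoming edges
g.update_all(fn.u_mul_e('h', 'e', 'm'), fn.sum('m', 'out'))

# edge-side: h_src * e stored on every edge
g.apply_edges(lambda edges: {'m': edges.src['h'] * edges.data['e']})
print(g.ndata['out'].shape, g.edata['m'].shape)  # (3, 20) (2, 20)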

But this operation updates the whole graph in parallel. I want it to happen in a sequential, levelized manner; that's why I opted for the prop_nodes function. Is this the correct approach?
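For what it's worth, a minimal sketch of that levelized behaviour with dgl.prop_nodes_topo, which visits nodes frontier by frontier in topological order and therefore requires a DAG (the graph and feature sizes here are assumptions):

import dgl
import dgl.function as fn
import torch as th

# hypothetical DAG 0 -> 1 -> 2, so propagation runs level by level
g = dgl.graph((th.tensor([0, 1]), th.tensor([1, 2])))
g.ndata['h'] = th.ones(3, 20)            # stand-in for the GCN embeddings
g.edata['e'] = th.rand(g.num_edges(), 1)

def edge_udf(edges):
    # message: source embedding scaled by the edge feature
    return {'m': edges.src['h'] * edges.data['e']}

# nodes are updated one topological level at a time, so node 2 sees
# the already-updated value of node 1
dgl.prop_nodes_topo(g, message_func=edge_udf, reduce_func=fn.sum('m', 'h'))
print(g.ndata['h'])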