Message passing with layer

I’ve been reading through the docs and I have a question: is it correct to use a layer inside the reduce_func? For example, if I want to use an nn.Linear to transform a message, I might define the linear layer in the __init__ function and then write something like:

    def reduce_func(self, nodes):
        # nodes.mailbox['m'] has shape (num_dst_nodes, num_messages, feat_dim);
        # aggregate over the message dimension before applying the layer
        m = nodes.mailbox['m'].sum(dim=1)
        # self.emb is an nn.Linear (or some other nn.Module) defined in __init__,
        # applied here to the node feature concatenated with the aggregated messages
        return {'feat': self.emb(torch.cat([nodes.data['h'], m], dim=-1))}

    def forward(self, g, h):
        funcs = dict()
        for c_etype in g.canonical_etypes:
            # srctype, etype, dsttype = c_etype
            funcs[c_etype] = (fn.v_sub_u('hv', 'hu', 'm'), self.reduce_func)
        g.multi_update_all(funcs, 'sum')

I’m not sure whether this is a sensible approach, and I’d appreciate any advice!

Yes, you can use nn modules in the reduce function.

Thanks for your answer. Since I am still learning DGL, I’d appreciate it if you could recommend some practical examples. @Rhett-Ying

Besides the tutorials and user guide on the docs page, you could pick up an example from https://www.dgl.ai/ to get started.
