About the properties of user-defined graphs

In DGL, we can create a directed graph from an item sequence like this:

seq = [958, 2781, 2781, 13598, 10166, 30589, 10166]

import dgl
import numpy as np
from collections import Counter

# Map the raw item IDs to consecutive node IDs.
items = np.unique(seq)
iid2nid = {iid: i for i, iid in enumerate(items)}
num_nodes = len(items)

seq_nid = [iid2nid[iid] for iid in seq]
# Count every forward pair (i -> j with j >= i), including self-loops.
counter = Counter(
    [(seq_nid[i], seq_nid[j]) for i in range(len(seq)) for j in range(i, len(seq))])
edges = counter.keys()
src, dst = zip(*edges)
g = dgl.graph((src, dst), num_nodes=num_nodes)
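
For reference, node- and edge-level features can be attached through ndata and edata, as long as the first dimension of the tensor equals the number of nodes or edges; a minimal sketch, with illustrative feature names:

import torch

# One row per node, e.g. the original item ID of each node.
g.ndata['iid'] = torch.tensor(items)                     # shape [num_nodes]
# One row per edge, e.g. the occurrence count of each edge
# (counter.values() follows the same order as counter.keys() above).
g.edata['count'] = torch.tensor(list(counter.values()))  # shape [num_edges]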

1. We can add attributes through edata and ndata, but can I keep the seq_nid obtained above as an attribute in the graph? Its length is not equal to the number of nodes.
2. I don't quite understand what the n_etypes parameter of dgl.nn.pytorch.GatedGraphConv corresponds to.
The following is the code for GGNN used in sequence recommendation. Are the two equivalent?

GGNN:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import Parameter


class GNN(nn.Module):
    r"""Graph neural networks are well-suited for session-based recommendation,
    because they can automatically extract features of session graphs while taking
    rich node connections into account.
    """

    def __init__(self, embedding_size, step=1):
        super(GNN, self).__init__()
        self.step = step
        self.embedding_size = embedding_size
        self.input_size = embedding_size * 2
        self.gate_size = embedding_size * 3
        self.w_ih = Parameter(torch.Tensor(self.gate_size, self.input_size))
        self.w_hh = Parameter(torch.Tensor(self.gate_size, self.embedding_size))
        self.b_ih = Parameter(torch.Tensor(self.gate_size))
        self.b_hh = Parameter(torch.Tensor(self.gate_size))
        self.b_iah = Parameter(torch.Tensor(self.embedding_size))
        self.b_ioh = Parameter(torch.Tensor(self.embedding_size))

        self.linear_edge_in = nn.Linear(self.embedding_size, self.embedding_size, bias=True)
        self.linear_edge_out = nn.Linear(self.embedding_size, self.embedding_size, bias=True)

    def GNNCell(self, A, hidden):
        r"""Obtain latent vectors of nodes via graph neural networks.

        Args:
            A (torch.FloatTensor): The connection matrix, shape of
                [batch_size, max_session_len, 2 * max_session_len]

            hidden (torch.FloatTensor): The item node embedding matrix, shape of
                [batch_size, max_session_len, embedding_size]

        Returns:
            torch.FloatTensor: Latent vectors of nodes, shape of
                [batch_size, max_session_len, embedding_size]

        """

        input_in = torch.matmul(A[:, :, :A.size(1)], self.linear_edge_in(hidden)) + self.b_iah
        input_out = torch.matmul(A[:, :, A.size(1): 2 * A.size(1)], self.linear_edge_out(hidden)) + self.b_ioh
        # [batch_size, max_session_len, embedding_size * 2]
        inputs = torch.cat([input_in, input_out], 2)

        # gi.size equals gh.size, shape of [batch_size, max_session_len, embedding_size * 3]
        gi = F.linear(inputs, self.w_ih, self.b_ih)
        gh = F.linear(hidden, self.w_hh, self.b_hh)
        # (batch_size, max_session_len, embedding_size)
        i_r, i_i, i_n = gi.chunk(3, 2)
        h_r, h_i, h_n = gh.chunk(3, 2)
        resetgate = torch.sigmoid(i_r + h_r)
        inputgate = torch.sigmoid(i_i + h_i)
        newgate = torch.tanh(i_n + resetgate * h_n)
        hy = (1 - inputgate) * hidden + inputgate * newgate
        return hy

    def forward(self, A, hidden):
        for i in range(self.step):
            hidden = self.GNNCell(A, hidden)
        return hidden
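
As a quick shape check, here is a hedged usage sketch (inputs are random and purely illustrative; in SR-GNN-style models, A concatenates a normalized in-adjacency and an out-adjacency of the session graph along the last dimension, hence 2 * max_session_len):

batch_size, max_len, emb = 2, 5, 16
gnn = GNN(embedding_size=emb, step=1)

# The raw Parameters above are allocated uninitialized; the surrounding
# model would normally initialize them, so we do it here for the sketch.
for p in gnn.parameters():
    nn.init.uniform_(p, -0.1, 0.1)

A = torch.rand(batch_size, max_len, 2 * max_len)  # [A_in | A_out]
hidden = torch.rand(batch_size, max_len, emb)
out = gnn(A, hidden)
print(out.shape)  # torch.Size([2, 5, 16])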

In the original GGNN paper (https://arxiv.org/pdf/1511.05493.pdf), edges are annotated with types, and the n_etypes argument gives the total number of edge types in the graph.
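
For comparison, a minimal sketch of the DGL side (names and sizes are illustrative): GatedGraphConv expects an integer edge-type vector of length num_edges, and n_etypes is the number of distinct types that vector may contain. To mimic the in/out split of the session-graph GGNN, one could use two edge types:

import dgl
import torch
from dgl.nn.pytorch import GatedGraphConv

src = torch.tensor([0, 1, 1, 2])
dst = torch.tensor([1, 0, 2, 1])
g = dgl.graph((src, dst), num_nodes=3)

# 0 = "incoming"-style edge, 1 = "outgoing"-style edge (illustrative choice).
etypes = torch.tensor([0, 1, 0, 1])

conv = GatedGraphConv(in_feats=16, out_feats=16, n_steps=1, n_etypes=2)
feat = torch.rand(3, 16)
out = conv(g, feat, etypes)  # shape [3, 16]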
