I want to implement the idea from this paper in my GCN model.
The general idea of the paper is this:
As the name suggests, Non-Negative MalConv was constrained during training to have non-negative weight matrices. The point of doing this is to prevent trivial attacks like those created against MalConv. When done properly, the non-negative weights make binary classifiers monotonic; meaning that the addition of new content can only increase the malicious score. This would make evading the model very difficult, because most evasion attacks do require adding content to the file. Fortunately for me, this implementation of Non-Negative MalConv has a subtle but critical flaw.
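To make the paper's mechanism concrete, here is a minimal sketch (my own, not code from the paper) of the usual way such a constraint is enforced in PyTorch: after every optimizer step, project the weights back onto the non-negative orthant. With non-negative weights, increasing any input feature can only raise the malicious logit:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(50, 1)  # single "maliciousness" logit
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# one toy training step on random data
x = torch.rand(8, 50)
y = torch.randint(0, 2, (8, 1)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# projection step: clip any negative weights back to zero
with torch.no_grad():
    model.weight.clamp_(min=0.0)

# monotonicity: bumping a feature can only raise (never lower) the logit
x_bumped = x.clone()
x_bumped[:, 0] += 1.0
with torch.no_grad():
    assert (model(x_bumped) >= model(x)).all()
```

The same projection idea carries over to any layer type, as long as every activation in the network is monotone non-decreasing (e.g. ReLU).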
Right now my model is very similar to this tutorial: https://docs.dgl.ai/en/0.4.x/tutorials/basics/4_batch.html
So how can I implement this idea in this code? Basically, I want my GCN to base its decision only on the malicious features in the nodes. (I don't have node labels; by "benign" I mean features whose presence in large amounts pushes the model toward labeling the graph as benign.) Adding benign features to a node should not raise the model's probability that the graph is benign; only adding malicious features should be able to change the graph classification probability.
For example, suppose each node has 50 features, and in the vanilla GCN increasing the value of the second feature pushes the classification toward benign. In the non-negative GCN, increasing this feature should no longer be able to push the prediction toward benign.
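To show what I mean, here is a rough sketch of how I imagine a non-negative GCN could look. It uses a hand-rolled message-passing layer in plain PyTorch as a stand-in for DGL's GraphConv, with a mean-over-nodes readout like the tutorial; the class name, toy graph, and `clamp_weights` hook are all my own assumptions, not a working implementation:

```python
import torch
import torch.nn as nn

class NonNegGCN(nn.Module):
    """GCN whose weights are projected to be non-negative after every
    optimizer step, so the malicious logit is monotone in node features."""
    def __init__(self, in_feats, hidden, n_layers=2):
        super().__init__()
        dims = [in_feats] + [hidden] * n_layers
        self.layers = nn.ModuleList(
            nn.Linear(d_in, d_out, bias=False)
            for d_in, d_out in zip(dims[:-1], dims[1:])
        )
        self.classify = nn.Linear(hidden, 1)  # single "maliciousness" logit

    def clamp_weights(self):
        # projection step: call this after every optimizer.step()
        with torch.no_grad():
            for layer in list(self.layers) + [self.classify]:
                layer.weight.clamp_(min=0.0)

    def forward(self, adj, feats):
        # adj: (N, N) non-negative adjacency with self-loops; feats: (N, F).
        # ReLU is monotone non-decreasing, so with non-negative weights and
        # a non-negative adjacency the whole network stays monotone.
        h = feats
        for layer in self.layers:
            h = torch.relu(layer(adj @ h))
        hg = h.mean(dim=0)        # mean-over-nodes readout, as in the tutorial
        return self.classify(hg)  # higher logit = more malicious

torch.manual_seed(0)
model = NonNegGCN(in_feats=50, hidden=16)
model.clamp_weights()  # in training this would run after each optimizer.step()

# toy path graph: 4 nodes with self-loops, 50 features each
adj = torch.eye(4)
adj[0, 1] = adj[1, 0] = adj[1, 2] = adj[2, 1] = adj[2, 3] = adj[3, 2] = 1.0
feats = torch.rand(4, 50)

with torch.no_grad():
    base = model(adj, feats).item()
    bumped = feats.clone()
    bumped[0, 1] += 5.0  # "add content" to feature 2 of node 0
    # the malicious score can only go up, never down
    assert model(adj, bumped).item() >= base - 1e-6
```

Note one consequence of this construction: increasing a "benign" feature is not ignored outright; the guarantee is only that it can never *decrease* the malicious score, which matches the monotonicity property quoted from the paper.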
Any idea how I can implement this in a GCN? Is it possible?