PLoS ONE. 2023 Mar 10;18(3):e0279604. doi: 10.1371/journal.pone.0279604

Table 2. Table of symbols used in this paper.

Symbol Definition
$G = (V, E)$ $G$: input graph, $V$: node set, $E$: edge set.
$|V| = n$ $n$: number of nodes.
$v \in V$ a node in $G$.
$x_v \in \mathbb{R}^c$ $x_v$: node feature vector, $c$: feature dimension.
$X \in \mathbb{R}^{n \times c}$ initial feature matrix.
$h_i^k$ embedding of node $i$ in the $k$-th layer.
$N(v)$ set of one-hop neighbors of node $v$ in $G$.
$M(\cdot)$ message aggregation function.
$H^k$ matrix of activations in the $k$-th layer.
$W^k$ layer-specific trainable weight matrix.
$\tilde{A}$ adjacency matrix of the undirected graph $G$ with added self-connections.
$\tilde{D}$ degree matrix of the undirected graph $G$.
$\sigma(\cdot)$ activation function.
$f(i, j)$ function used to calculate the attention coefficient between nodes $i$ and $j$.
$g(\cdot)$ feature transformation function.
$\delta^k$ weight-matrix decay parameter of the $k$-th layer.
$Z^k$ output of the $k$-th layer of DGCNNII.
$A_{GAT} \in \mathbb{R}^{n \times n}$ adjacency matrix based on node attention coefficients.
$\alpha, \beta, \gamma$ hyper-parameters for adjusting the proportion of information aggregation.
$I_n$ identity matrix.
$a$ weight vector that projects the concatenated vector to a scalar.
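For orientation, the sketch below shows how several of these symbols typically fit together: building $\tilde{A}$ and $\tilde{D}$, applying one normalized propagation step $H^{k+1} = \sigma(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2} H^k W^k)$, and forming attention coefficients from the weight vector $a$ to obtain $A_{GAT}$. This is a minimal generic GCN/GAT-style illustration in NumPy, not the authors' DGCNNII implementation; the toy graph, hidden width, LeakyReLU slope, and all variable names are assumptions for illustration only.

```python
# Minimal sketch relating the symbols in Table 2 (assumed toy values;
# not the DGCNNII code from the paper).
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: n = 4 nodes, c = 3 input features (X in R^{n x c}).
n, c = 4, 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency of the undirected graph G
X = rng.normal(size=(n, c))                 # initial feature matrix X

# A~ = A + I_n (added self-connections); D~ = corresponding degree matrix.
A_tilde = A + np.eye(n)
D_tilde = np.diag(A_tilde.sum(axis=1))

# One symmetrically normalized propagation step:
# H^{k+1} = sigma(D~^{-1/2} A~ D~^{-1/2} H^k W^k), with sigma = ReLU here.
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D_tilde)))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

hidden = 5                                  # hidden width (assumed)
W0 = rng.normal(size=(c, hidden))           # W^0: layer-specific weight matrix
H1 = np.maximum(A_hat @ X @ W0, 0.0)        # H^1, starting from H^0 = X

# GAT-style attention: f(i, j) projects the concatenation of the transformed
# features of i and j to a scalar with weight vector a, then a row-wise
# softmax over N(i) (plus the self-loop) yields A_GAT in R^{n x n}.
a_vec = rng.normal(size=2 * hidden)         # weight vector a
scores = np.full((n, n), -np.inf)
for i in range(n):
    for j in range(n):
        if A_tilde[i, j] > 0:               # only neighbors contribute
            s = float(a_vec @ np.concatenate([H1[i], H1[j]]))
            scores[i, j] = s if s > 0 else 0.2 * s  # LeakyReLU, slope 0.2 assumed
A_GAT = np.exp(scores - scores.max(axis=1, keepdims=True))
A_GAT = A_GAT / A_GAT.sum(axis=1, keepdims=True)

print(H1.shape, A_GAT.shape)                # (4, 5), (4, 4)
```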