jittor_geometric.nn.conv
Convolutional layers used in Graph Neural Networks.
- class jittor_geometric.nn.conv.APPNP(K, alpha, spmm=True, **kwargs)[source]
Bases:
Module
The graph propagation operator from the “Predict then Propagate: Graph Neural Networks meet Personalized PageRank” paper.
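For reference, the propagation defined in the paper is the truncated personalized PageRank recursion, where \(\alpha\) is the teleport probability and \(\mathbf{\hat{P}}\) is the symmetrically normalized adjacency matrix with self-loops:
\[\mathbf{Z}^{(0)} = \mathbf{X}, \qquad \mathbf{Z}^{(k)} = (1 - \alpha)\, \mathbf{\hat{P}} \mathbf{Z}^{(k-1)} + \alpha\, \mathbf{Z}^{(0)}, \qquad \mathbf{Z} = \mathbf{Z}^{(K)}.\]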
- class jittor_geometric.nn.conv.BernNet(K, spmm=True, **kwargs)[source]
Bases:
Module
The graph propagation operator from the “BernNet: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation” paper.
Mathematical Formulation:
\[\mathbf{Z} = \sum_{k=0}^{K} \alpha_k \mathrm{Bern}_{k}(\tilde{\mathbf{L}}) \mathbf{X},\]
- where:
\(\mathbf{X}\) is the input node feature matrix.
\(\mathbf{Z}\) is the output node feature matrix.
\(\mathrm{Bern}_{k}\) is the Bernstein polynomial of order \(k\).
\(\tilde{\mathbf{L}}\) is the normalized Laplacian matrix of the graph, translated to the interval \([-1, 1]\).
\(\alpha_k\) is the parameter for the \(k\)-th order Bernstein polynomial.
- Parameters:
K (int) – Order of polynomial, or maximum number of hops considered for message passing.
spmm (bool, optional) – If set to True, uses sparse matrix multiplication (SPMM) for propagation. Default is True.
**kwargs (optional) – Additional arguments for the MessagePassing class.
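For context, in the BernNet paper the \(k\)-th Bernstein basis term is built from the normalized Laplacian \(\hat{\mathbf{L}}\) with spectrum in \([0, 2]\) (this implementation's \(\tilde{\mathbf{L}}\) is the shifted variant on \([-1, 1]\)):
\[\mathrm{Bern}_{k}(\hat{\mathbf{L}}) = \frac{1}{2^K} \binom{K}{k} (2\mathbf{I} - \hat{\mathbf{L}})^{K-k} \hat{\mathbf{L}}^{k}.\]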
- class jittor_geometric.nn.conv.ChebConv(in_channels, out_channels, K, normalization='sym', bias=True, **kwargs)[source]
Bases:
MessagePassing
The Chebyshev spectral graph convolutional operator from the “Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering” paper.
\[\mathbf{X}^{\prime} = \sum_{k=1}^{K} \mathbf{Z}^{(k)} \cdot \mathbf{\Theta}^{(k)}\]
where \(\mathbf{Z}^{(k)}\) is computed recursively by
\[\begin{aligned}\mathbf{Z}^{(1)} &= \mathbf{X}\\ \mathbf{Z}^{(2)} &= \mathbf{\hat{L}} \cdot \mathbf{X}\\ \mathbf{Z}^{(k)} &= 2 \cdot \mathbf{\hat{L}} \cdot \mathbf{Z}^{(k-1)} - \mathbf{Z}^{(k-2)}\end{aligned}\]
and \(\mathbf{\hat{L}}\) denotes the scaled and normalized Laplacian \(\frac{2\mathbf{L}}{\lambda_{\max}} - \mathbf{I}\).
- Parameters:
in_channels (int) – Number of input features per node.
out_channels (int) – Number of output features per node.
K (int) – Order of Chebyshev polynomials used in the layer.
normalization (str, optional) – Type of Laplacian normalization. Options are "sym" (symmetric), "rw" (random walk), or None (no normalization). Default is "sym".
bias (bool, optional) – Whether to include a learnable bias term. Default is True.
**kwargs (optional) – Additional arguments for the MessagePassing class.
- message(x_j, norm)[source]
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, any Var passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.
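The recurrence above is easy to sanity-check with dense matrices. The following sketch is purely illustrative (the layer itself propagates over the sparse graph); L_hat is assumed to be a dense jt.Var holding \(\mathbf{\hat{L}}\):
>>> import jittor as jt
>>> def cheb_terms(L_hat, x, K):
...     zs = [x]  # Z^(1) = X
...     if K >= 2:
...         zs.append(jt.matmul(L_hat, x))  # Z^(2) = L_hat X
...     for _ in range(K - 2):
...         # Z^(k) = 2 L_hat Z^(k-1) - Z^(k-2)
...         zs.append(2 * jt.matmul(L_hat, zs[-1]) - zs[-2])
...     return zs  # the layer then computes sum_k Z^(k) Theta^(k)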
- class jittor_geometric.nn.conv.ChebNetII(K, spmm=True, **kwargs)[source]
Bases:
Module
The graph propagation operator from the “Convolutional Neural Networks on Graphs with Chebyshev Approximation, Revisited” paper (https://arxiv.org/abs/2202.03580).
Mathematical Formulation:
\[\mathbf{Z} = \sum_{k=0}^{K} \alpha_k \mathrm{cheb}_{k}(\tilde{\mathbf{L}}) \mathbf{X},\]
- where:
\(\mathbf{X}\) is the input node feature matrix.
\(\mathbf{Z}\) is the output node feature matrix.
\(\mathrm{cheb}_{k}\) is the Chebyshev polynomial of order \(k\).
\(\alpha_k\) is the parameter for the \(k\)-th order Chebyshev polynomial; the \(\alpha_k\) are derived from learnable values placed at the Chebyshev nodes.
\(\tilde{\mathbf{L}}\) is the normalized Laplacian matrix of the graph, translated to the interval \([-1, 1]\).
- Parameters:
K (int) – Order of polynomial, or maximum number of hops considered for message passing.
spmm (bool, optional) – If set to True, uses sparse matrix multiplication (SPMM) for propagation. Default is True.
**kwargs (optional) – Additional arguments for the MessagePassing class.
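For context, the Chebyshev nodes mentioned above are the standard interpolation points \(x_j = \cos\big((j + 1/2)\pi / (K + 1)\big)\) for \(j = 0, \dots, K\); ChebNetII learns one value per node and recovers the \(\alpha_k\) by Chebyshev interpolation rather than learning them directly.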
- class jittor_geometric.nn.conv.ClusterGCNConv(in_channels, out_channels, diag_lambda=0.0, improved=False, cached=False, add_self_loops=True, normalize=True, bias=True, spmm=True, **kwargs)[source]
Bases:
MessagePassing
The ClusterGCN graph convolutional operator from the “Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks” paper.
This class implements the ClusterGCN layer, which updates node representations by aggregating information from neighboring nodes and adding diag_lambda times the node's own embedding, taking the graph structure into account. It supports both message-passing and sparse matrix multiplication (SPMM) for propagation.
- Parameters:
in_channels (int) – Size of each input sample (number of input features per node).
out_channels (int) – Size of each output sample (number of output features per node).
diag_lambda (float) – Diagonal enhancement value.
improved (bool, optional) – If set to True, uses an improved self-loop weight (can usually be ignored; serves a similar purpose to diag_lambda). Default is False.
cached (bool, optional) – If set to True, caches the processed CSR/CSC structures and edge weights. Default is False.
add_self_loops (bool, optional) – Whether to add self-loops to the input graph. Default is True.
normalize (bool, optional) – If set to True, applies normalization to the adjacency matrix; should normally be left True. Default is True.
bias (bool, optional) – If set to True, adds a learnable bias to the output. Default is True.
spmm (bool, optional) – If set to True, uses sparse matrix multiplication (SPMM) for propagation. Default is True.
**kwargs (optional) – Additional arguments for the base Module.
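For reference, the diagonal-enhancement propagation proposed in the Cluster-GCN paper can be written as
\[\mathbf{X}^{\prime} = \big( \tilde{\mathbf{A}} + \lambda \cdot \mathrm{diag}(\tilde{\mathbf{A}}) \big) \mathbf{X} \mathbf{\Theta},\]
where \(\tilde{\mathbf{A}}\) is the normalized adjacency matrix with self-loops and \(\lambda\) is the diag_lambda parameter above.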
- class jittor_geometric.nn.conv.EGNNConv(feats_dim, pos_dim=3, edge_attr_dim=0, m_dim=16, fourier_features=0, soft_edge=0, norm_feats=False, norm_coors=False, norm_coors_scale_init=0.01, update_feats=True, update_coors=True, dropout=0.0, coor_weights_clamp_value=None, aggr='add', **kwargs)[source]
Bases:
MessagePassing
An implementation of the E(n) Equivariant Graph Neural Network (EGNN) from the “E(n) Equivariant Graph Neural Networks” paper, built on Jittor and Jittor Geometric.
- propagate(edge_index, size=None, **kwargs)[source]
The initial call to start propagating messages.
- Parameters:
edge_index – holds the indices of a general (sparse) assignment matrix of shape [N, M].
size (tuple, optional) – if None, the size will be inferred and assumed to be quadratic.
**kwargs – any additional data which is needed to construct and aggregate messages, and to update node embeddings.
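For context, the EGNN layer of that paper updates node features \(\mathbf{h}_i\) and coordinates \(\mathbf{x}_i\) jointly via learnable MLPs \(\phi_e, \phi_x, \phi_h\):
\[\mathbf{m}_{ij} = \phi_e\big(\mathbf{h}_i, \mathbf{h}_j, \|\mathbf{x}_i - \mathbf{x}_j\|^2, \mathbf{e}_{ij}\big), \qquad \mathbf{x}^{\prime}_i = \mathbf{x}_i + C \sum_{j \in \mathcal{N}(i)} (\mathbf{x}_i - \mathbf{x}_j)\, \phi_x(\mathbf{m}_{ij}), \qquad \mathbf{h}^{\prime}_i = \phi_h\Big(\mathbf{h}_i, \sum_{j \in \mathcal{N}(i)} \mathbf{m}_{ij}\Big),\]
which keeps the coordinate update equivariant to E(n) transformations; the update_feats and update_coors flags above toggle the feature and coordinate updates respectively.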
- class jittor_geometric.nn.conv.EvenNet(K, alpha, spmm=True, **kwargs)[source]
Bases:
Module
The graph propagation operator from the “EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks” paper (https://arxiv.org/pdf/2205.13892).
This class implements the EvenNet architecture, which improves the robustness of graph neural networks by focusing on even-hop neighbors while ignoring odd-hop neighbors.
- Parameters:
K (int) – Maximum number of hops considered for message passing.
alpha (float) – Parameter controlling the weighting of different hops.
spmm (bool, optional) – If set to True, uses sparse matrix multiplication (SPMM) for propagation. Default is True.
**kwargs (optional) – Additional arguments for the base Module.
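In the notation of the GPRGNN entry below, EvenNet keeps only the even powers of the propagation matrix,
\[\mathbf{Z} = \sum_{k=0}^{\lfloor K/2 \rfloor} \alpha_{2k}\, \mathbf{P}^{2k}\, \mathbf{X},\]
so the learned polynomial filter contains only even-order terms, which is what removes the influence of odd-hop neighbors.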
- class jittor_geometric.nn.conv.GATConv(in_channels, out_channels, e_num, improved=False, cached=False, add_self_loops=True, normalize=True, bias=True, **kwargs)[source]
Bases:
MessagePassingNts
The graph attention operator from the “Graph Attention Networks” paper (ICLR 2018).
- edge_forward(x, csc)[source]
EdgeForward is a parameterized function defined on each edge to generate an output message by combining the edge representation with the representations of source and destination.
- Return type:
Var
- scatter_to_edge(x, csc)[source]
ScatterToEdge is an edge message generating operation that scatters the source and destination representations to edges for the EdgeForward computation.
- Return type:
Var
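For reference, the attention mechanism of the GAT paper computes the coefficients
\[\alpha_{i,j} = \frac{\exp\big(\mathrm{LeakyReLU}\big(\mathbf{a}^{\top} [\mathbf{W}\mathbf{x}_i \,\Vert\, \mathbf{W}\mathbf{x}_j]\big)\big)}{\sum_{k \in \mathcal{N}(i) \cup \{i\}} \exp\big(\mathrm{LeakyReLU}\big(\mathbf{a}^{\top} [\mathbf{W}\mathbf{x}_i \,\Vert\, \mathbf{W}\mathbf{x}_k]\big)\big)},\]
and the output is \(\mathbf{x}^{\prime}_i = \sum_{j \in \mathcal{N}(i) \cup \{i\}} \alpha_{i,j} \mathbf{W} \mathbf{x}_j\).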
- class jittor_geometric.nn.conv.GCN2Conv(in_channels, out_channels, cached=False, add_self_loops=True, spmm=False, **kwargs)[source]
Bases:
MessagePassing
The graph convolutional operator with initial residual connections and identity mapping (GCNII) from the “Simple and Deep Graph Convolutional Networks” paper.
This class implements the GCNII layer, which combines initial residual connections and identity mapping to enable deeper graph convolutional networks without oversmoothing. The layer supports both message-passing and sparse matrix multiplication (SPMM) for efficient propagation.
Mathematical Formulation:
\[\mathbf{H}^{(l)} = (1 - \beta) \big( (1 - \alpha) \mathbf{H}^{(l-1)} + \alpha \mathbf{H}^{(0)} \big) + \beta \big( \mathbf{\Theta}_1 \mathbf{H}^{(l-1)} + \mathbf{\Theta}_2 \mathbf{H}^{(0)} \big)\]
- where:
\(\mathbf{H}^{(l)}\) is the node feature matrix at layer \(l\).
\(\mathbf{H}^{(0)}\) is the initial node feature matrix.
\(\mathbf{\Theta}_1\) and \(\mathbf{\Theta}_2\) are learnable weight matrices.
\(\alpha\) controls the strength of the initial residual connection.
\(\beta\) balances feature aggregation and transformation.
- Parameters:
in_channels (int) – Number of input features per node.
out_channels (int) – Number of output features per node.
cached (bool, optional) – If set to True, caches the normalized edge indices. Default is False.
add_self_loops (bool, optional) – If set to True, adds self-loops to the input graph. Default is True.
spmm (bool, optional) – If set to True, uses sparse matrix multiplication (SPMM) for propagation. Default is False.
**kwargs (optional) – Additional arguments for the base MessagePassing class.
- class jittor_geometric.nn.conv.GCNConv(in_channels, out_channels, bias=True, spmm=True, **kwargs)[source]
Bases:
Module
The graph convolutional operator from the “Semi-supervised Classification with Graph Convolutional Networks” paper.
This class implements the Graph Convolutional Network (GCN) layer, which updates node representations by aggregating information from neighboring nodes, taking the graph structure into account. It supports both message-passing and sparse matrix multiplication (SPMM) for propagation.
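For reference, the operator computes the propagation rule
\[\mathbf{X}^{\prime} = \mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2} \mathbf{X} \mathbf{\Theta},\]
where \(\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}\) is the adjacency matrix with self-loops and \(\mathbf{\hat{D}}\) its diagonal degree matrix (see gcn_norm in the example below).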
- Parameters:
in_channels (int) – Size of each input sample (number of input features per node).
out_channels (int) – Size of each output sample (number of output features per node).
bias (bool, optional) – If set to True, adds a learnable bias to the output. Default is True.
spmm (bool, optional) – If set to True, uses sparse matrix multiplication (SPMM) for propagation. Default is True.
**kwargs (optional) – Additional arguments for the base Module.
Example
>>> dataset = MyGraphDataset(root='/path/to/dataset')
>>> data = dataset[0]  # Access the first graph object
>>> conv = GCNConv(in_channels=16, out_channels=32, bias=True, spmm=True)
>>> v_num = data.x.shape[0]  # Number of nodes
>>> edge_index, edge_weight = data.edge_index, data.edge_attr
>>> edge_index, edge_weight = gcn_norm(edge_index, edge_weight, v_num,
...                                    improved=False, add_self_loops=True)
>>> with jt.no_grad():
...     csc = cootocsc(edge_index, edge_weight, v_num)
...     csr = cootocsr(edge_index, edge_weight, v_num)
>>> x = jt.random((v_num, 16))  # Randomly initialize node features
>>> out = conv(x, csc, csr)  # Apply GCN layer
- class jittor_geometric.nn.conv.GPRGNN(K, alpha, Init, spmm=True, **kwargs)[source]
Bases:
Module
The graph propagation operator from the “Adaptive Universal Generalized PageRank Graph Neural Network” paper.
Mathematical Formulation:
\[\mathbf{Z} = \sum_{k=0}^{K} \alpha_k \mathbf{P}^{k} \mathbf{X},\]
- where:
\(\mathbf{X}\) is the input node feature matrix.
\(\mathbf{Z}\) is the output node feature matrix.
\(\mathbf{P}\) is the normalized adjacency matrix of the graph.
\(\alpha_k\) is the parameter for the \(k\)-th order polynomial.
- Parameters:
K (int) – Order of polynomial, or maximum number of hops considered for message passing.
alpha (float) – Parameter controlling the weighting of different hops.
Init (str) – Initialization method for the propagation weights. Possible values are ‘SGC’, ‘PPR’, ‘NPPR’, ‘Random’, ‘WS’.
spmm (bool, optional) – If set to True, uses sparse matrix multiplication (SPMM) for propagation. Default is True.
**kwargs (optional) – Additional arguments for the base Module.
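Example (a minimal sketch; it assumes the propagation module follows the same (x, csc, csr) calling convention as the GCNConv example above — check the source for the exact forward signature):
>>> prop = GPRGNN(K=10, alpha=0.1, Init='PPR')
>>> out = prop(x, csc, csr)  # propagate pre-transformed features x over the graph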
- class jittor_geometric.nn.conv.MessagePassing(aggr='add', flow='source_to_target', node_dim=-2)[source]
Bases:
Module
- message(x_j)[source]
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, any Var passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.
- Return type:
Var
- special_args: Set[str] = {'adj_t', 'dim_size', 'edge_index', 'edge_index_i', 'edge_index_j', 'index', 'ptr', 'size', 'size_i', 'size_j'}
- update(inputs)[source]
Updates node embeddings in analogy to \(\gamma_{\mathbf{\Theta}}\) for each node \(i \in \mathcal{V}\). Takes in the output of aggregation as first argument and any argument which was initially passed to propagate().
- Return type:
Var
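A minimal sketch of a custom operator built on this class, using only the hooks documented here (message(), update(), propagate()); the sum aggregation and the execute() calling convention are assumptions to be checked against the source:
>>> from jittor_geometric.nn.conv import MessagePassing
>>> class SumConv(MessagePassing):
...     def __init__(self):
...         super().__init__(aggr='add', flow='source_to_target')
...     def message(self, x_j):
...         return x_j  # identity message from each source node j
...     def update(self, inputs):
...         return inputs  # use the aggregation output as-is
...     def execute(self, x, edge_index):
...         # each node receives the sum of its neighbors' features
...         return self.propagate(edge_index, x=x)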
- class jittor_geometric.nn.conv.MessagePassingNts(aggr='add', flow='source_to_target', node_dim=-2)[source]
Bases:
Module
- aggregate_with_weight(x, csc, csr)[source]
Used for the GCN demo; combines ‘scatter_to_edge’ with ‘scatter_to_vertex’.
- Return type:
Var
- edge_forward(x)[source]
EdgeForward is a parameterized function defined on each edge to generate an output message by combining the edge representation with the representations of source and destination.
- Return type:
Var
- scatter_to_edge(x)[source]
ScatterToEdge is an edge message generating operation that scatters the source and destination representations to edges for the EdgeForward computation.
- Return type:
Var
- scatter_to_vertex(x, csc)[source]
Scatter_to_vertex takes incoming edge-associated Vars as input and outputs a new vertex representation for the next layer's computation.
- Return type:
Var
- special_args: Set[str] = {'adj_t', 'dim_size', 'edge_index', 'edge_index_i', 'edge_index_j', 'index', 'ptr', 'size', 'size_i', 'size_j'}
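Taken together, these hooks form an edge-centric pipeline. A sketch of how a layer's execute() might chain them, matching the signatures documented above (illustrative only, not an actual layer in the library):
>>> class NtsConv(MessagePassingNts):
...     def execute(self, x, csc):
...         e = self.scatter_to_edge(x)            # endpoint features onto edges
...         m = self.edge_forward(e)               # per-edge message
...         return self.scatter_to_vertex(m, csc)  # reduce messages onto vertices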
- class jittor_geometric.nn.conv.OptBasisConv(K, n_channels, spmm=True, **kwargs)[source]
Bases:
Module
The graph propagation operator from the “Graph Neural Networks with Learnable and Optimal Polynomial Bases” paper (https://openreview.net/pdf?id=UjQIoJv927).
This class implements the OptBasisConv architecture, which implicitly utilizes the optimal polynomial basis for each channel via three-term recurrence propagation. Check Algorithms 4 and 5 in the paper for more details.
- Mathematical Formulation:
Please refer to Algorithms 2, 4, and 5 in the paper for more details.
- Parameters:
K (int) – Order of polynomial bases.
n_channels (int) – Number of signal channels to be filtered.
spmm (bool, optional) – If set to True, uses sparse matrix multiplication (SPMM) for propagation. Default is True.
**kwargs (optional) – Additional arguments for the base Module.
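For orientation, monic orthogonal polynomial families satisfy a three-term recurrence of the form
\[P_{k+1}(x) = (x - a_k)\, P_k(x) - b_k\, P_{k-1}(x),\]
which is what lets OptBasisConv propagate with the (implicitly defined) optimal basis while keeping only two previous terms per step, without ever materializing the basis.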
- class jittor_geometric.nn.conv.SAGEConv(in_channels, out_channels, improved=False, cached=False, add_self_loops=True, normalize=True, project=False, root_weight=True, bias=True, spmm=True, **kwargs)[source]
Bases:
MessagePassing
The GraphSAGE operator from the “Inductive Representation Learning on Large Graphs” paper.
This class implements the GraphSAGE layer, which updates node representations by aggregating information from neighboring nodes and combining them with the node's own transformed embedding, taking the graph structure into account. It supports both message-passing and sparse matrix multiplication (SPMM) for propagation.
- Parameters:
in_channels (int) – Size of each input sample (number of input features per node).
out_channels (int) – Size of each output sample (number of output features per node).
improved (bool, optional) – If set to True, uses an improved self-loop weight (can usually be ignored). Default is False.
cached (bool, optional) – If set to True, caches the processed CSR/CSC structures and edge weights. Default is False.
add_self_loops (bool, optional) – Whether to add self-loops to the input graph. Default is True.
normalize (bool, optional) – If set to True, applies normalization to the adjacency matrix; should normally be left True. Default is True.
project (bool, optional) – If set to True, the node features are transformed before aggregation. Default is False.
root_weight (bool, optional) – If set to True, adds the transformed root node features to the result. Default is True.
bias (bool, optional) – If set to True, adds a learnable bias to the output. Default is True.
spmm (bool, optional) – If set to True, uses sparse matrix multiplication (SPMM) for propagation. Default is True.
**kwargs (optional) – Additional arguments for the base Module.
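For reference, with mean aggregation the GraphSAGE update can be written as
\[\mathbf{x}^{\prime}_i = \mathbf{W}_1 \mathbf{x}_i + \mathbf{W}_2 \cdot \mathrm{mean}_{j \in \mathcal{N}(i)} \mathbf{x}_j,\]
where the first (root) term is controlled by root_weight and an optional pre-transformation of the neighbor features by project.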
- class jittor_geometric.nn.conv.SGConv(in_channels, out_channels, K=1, bias=True, spmm=True, **kwargs)[source]
Bases:
MessagePassing
The simple graph convolutional operator from the “Simplifying Graph Convolutional Networks” paper.
This class implements the Simplified Graph Convolution (SGC) layer, which removes nonlinearities and collapses weight matrices across layers to achieve a simplified and computationally efficient graph convolution.
Mathematical Formulation:
\[\mathbf{X}^{\prime} = {\left(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2} \right)}^K \mathbf{X} \mathbf{\Theta},\]
- where:
\(\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}\) denotes the adjacency matrix with added self-loops.
\(\hat{D}_{ii} = \sum_{j} \hat{A}_{ij}\) is its diagonal degree matrix.
\(K\) controls the number of propagation steps.
The adjacency matrix can include values other than 1, representing edge weights via the optional edge_weight variable.
- Parameters:
in_channels (int) – Number of input features per node.
out_channels (int) – Number of output features per node.
K (int, optional) – Number of propagation steps. Default is 1.
bias (bool, optional) – Whether to include a learnable bias term. Default is True.
spmm (bool, optional) – If set to True, uses sparse matrix multiplication (SPMM) for propagation. Default is True.
**kwargs (optional) – Additional arguments for the MessagePassing class.
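Because the propagation is linear, the \(K\)-hop filter \(\big(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\big)^K \mathbf{X}\) can be precomputed once, after which training reduces to a linear model on the propagated features; this is the efficiency argument of the SGC paper. Constructing the layer follows the signature above:
>>> conv = SGConv(in_channels=16, out_channels=7, K=2)  # two propagation steps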
- class jittor_geometric.nn.conv.TransformerConv(in_channels, out_channels, heads=1, concat=True, beta=False, dropout=0.0, edge_dim=None, bias=True, root_weight=True, **kwargs)
Bases:
MessagePassing
The graph transformer operator from the “Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification” paper.
\[\mathbf{x}^{\prime}_i = \mathbf{W}_1 \mathbf{x}_i + \sum_{j \in \mathcal{N}(i)} \alpha_{i,j} \mathbf{W}_2 \mathbf{x}_{j},\]
where the attention coefficients \(\alpha_{i,j}\) are computed via multi-head dot product attention:
\[\alpha_{i,j} = \textrm{softmax} \left( \frac{(\mathbf{W}_3\mathbf{x}_i)^{\top} (\mathbf{W}_4\mathbf{x}_j)} {\sqrt{d}} \right)\]
- Parameters:
in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.
out_channels (int) – Size of each output sample.
heads (int, optional) – Number of multi-head-attentions. (default: 1)
concat (bool, optional) – If set to False, the multi-head attentions are averaged instead of concatenated. (default: True)
beta (bool, optional) – If set, will combine aggregation and skip information via
\[\mathbf{x}^{\prime}_i = \beta_i \mathbf{W}_1 \mathbf{x}_i + (1 - \beta_i) \underbrace{\left(\sum_{j \in \mathcal{N}(i)} \alpha_{i,j} \mathbf{W}_2 \mathbf{x}_j \right)}_{=\mathbf{m}_i}\]
with \(\beta_i = \textrm{sigmoid}(\mathbf{w}_5^{\top} [ \mathbf{W}_1 \mathbf{x}_i, \mathbf{m}_i, \mathbf{W}_1 \mathbf{x}_i - \mathbf{m}_i ])\). (default: False)
dropout (float, optional) – Dropout probability of the normalized attention coefficients which exposes each node to a stochastically sampled neighborhood during training. (default: 0)
edge_dim (int, optional) – Edge feature dimensionality (in case there are any). Edge features are added to the keys after linear transformation, that is, prior to computing the attention dot product. They are also added to the final values after the same linear transformation. The model is:
\[\mathbf{x}^{\prime}_i = \mathbf{W}_1 \mathbf{x}_i + \sum_{j \in \mathcal{N}(i)} \alpha_{i,j} \left( \mathbf{W}_2 \mathbf{x}_{j} + \mathbf{W}_6 \mathbf{e}_{ij} \right),\]
where the attention coefficients \(\alpha_{i,j}\) are now computed via:
\[\alpha_{i,j} = \textrm{softmax} \left( \frac{(\mathbf{W}_3\mathbf{x}_i)^{\top} (\mathbf{W}_4\mathbf{x}_j + \mathbf{W}_6 \mathbf{e}_{ij})} {\sqrt{d}} \right)\]
(default: None)
bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
root_weight (bool, optional) – If set to False, the layer will not add the transformed root node features to the output and the option beta is set to False. (default: True)
**kwargs (optional) – Additional arguments of jittor_geometric.nn.conv.MessagePassing.
- execute(x, edge_index, edge_attr=None, return_attention_weights=None)
Runs the forward pass of the module.
- Parameters:
return_attention_weights (bool, optional) – If set to True, will additionally return the tuple (edge_index, attention_weights), holding the computed attention weights for each edge. (default: None)
- forward(x, edge_index, edge_attr=None, return_attention_weights=None)
Runs the forward pass of the module.
- Parameters:
return_attention_weights (bool, optional) – If set to True, will additionally return the tuple (edge_index, attention_weights), holding the computed attention weights for each edge. (default: None)
- message(query_i, key_j, value_j, edge_attr, index, ptr, size_i)
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, any Var passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j.
- Return type:
Var
- reset_parameters()
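Example (a minimal sketch; the call and return formats follow the forward() documentation above, while the random graph is purely illustrative):
>>> import jittor as jt
>>> num_nodes = 100
>>> x = jt.random((num_nodes, 16))
>>> edge_index = jt.randint(0, num_nodes, (2, 500))  # random edges for illustration
>>> conv = TransformerConv(in_channels=16, out_channels=32, heads=4)
>>> out = conv(x, edge_index)  # shape [num_nodes, 4 * 32] since concat=True
>>> out, (ei, alpha) = conv(x, edge_index, return_attention_weights=True)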