Published in final edited form as: Cell Syst. 2020 Mar 4;10(3):265–274.e11. doi: 10.1016/j.cels.2020.02.003
Comparison of algorithms: methodology, parameters, and worst-case complexity.

Shared notation:
N: the number of samples
d: the dimension of the X and Y manifolds (default: 2)
k: the number of nearest neighbors
L: the number of conditioning genes
I: the dimension of the feature data
Algorithm: CCM (convergent cross mapping)
Methodology: determines the causality from X to Y based on how well the cross-mapped estimate of X can be reconstructed from the nearest neighbors found in the Y shadow manifold (sketched below).
Parameters:
E: the number of lags embedded in the shadow manifold
Tau: the time lag between each consecutive pair of time samples (default: 1)
Worst-case complexity: O(2EN log N) + O(2(E + 1)N), where the first term is the kd-tree algorithm for the kNN search and the second is the regression and weight estimation.
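To make the CCM entry concrete, here is a minimal, illustrative Python sketch (not the authors' implementation; the function name ccm_score and the default E = 3 are ours). It embeds Y into an E-lag shadow manifold, queries a kd-tree for each point's E + 1 nearest neighbors (the kd-tree term above), and forms an exponentially weighted cross-mapped estimate of X (the regression and weighting term); the correlation between the estimate and the true X scores the X -> Y link.

```python
import numpy as np
from scipy.spatial import cKDTree

def ccm_score(x, y, E=3, tau=1):
    """Toy convergent cross mapping: score how well Y's shadow manifold
    reconstructs X (evidence that X influenced Y's dynamics)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y) - (E - 1) * tau
    # Shadow manifold of Y: row t is (y_t, y_{t-tau}, ..., y_{t-(E-1)tau}).
    M = np.column_stack([y[(E - 1 - j) * tau : (E - 1 - j) * tau + n]
                         for j in range(E)])
    targets = x[(E - 1) * tau : (E - 1) * tau + n]  # X aligned with row times
    dist, idx = cKDTree(M).query(M, k=E + 2)        # self plus E + 1 neighbors
    dist, idx = dist[:, 1:], idx[:, 1:]             # drop the self-match
    # Exponential weights from neighbor distances (Sugihara et al., 2012).
    w = np.exp(-dist / (dist[:, [0]] + 1e-12))
    w /= w.sum(axis=1, keepdims=True)
    x_hat = (w * targets[idx]).sum(axis=1)          # cross-mapped estimate of X
    return np.corrcoef(x_hat, targets)[0, 1]
```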
Algorithm: Granger causality
Methodology: determines the causality from X to Y based on how much the past samples of X contribute to linearly estimating the current state of Y, compared with estimating Y from its own past alone (sketched below).
Parameters:
Maxlag: the number of past lags included in estimating the current state of Y
Worst-case complexity: O(IN + 2I²N + I³), the complexity of linear regression.
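A minimal pairwise Granger test can be sketched as two linear regressions and an F-test; the version below is illustrative Python under our own naming, not the paper's implementation. The restricted model uses only Y's own Maxlag past values; the full model adds X's past, and the F-statistic asks whether the added lags reduce the residual error more than chance would.

```python
import numpy as np
from scipy import stats

def granger_f_pvalue(x, y, maxlag=2):
    """F-test p-value for 'the past of x helps linearly predict y'."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y) - maxlag
    Y = y[maxlag:]
    # Lag matrices: column j holds the series shifted back by j steps.
    ylags = np.column_stack([y[maxlag - j : maxlag - j + n]
                             for j in range(1, maxlag + 1)])
    xlags = np.column_stack([x[maxlag - j : maxlag - j + n]
                             for j in range(1, maxlag + 1)])
    ones = np.ones((n, 1))
    restricted = np.hstack([ones, ylags])          # Y's own past only
    full = np.hstack([ones, ylags, xlags])         # ... plus X's past
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(restricted), rss(full)
    df1, df2 = maxlag, n - full.shape[1]
    F = ((rss_r - rss_f) / df1) / (rss_f / df2)
    return stats.f.sf(F, df1, df2)                 # small p => Granger-causal
```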
Algorithm: RDI and cRDI (restricted directed information and its conditioned variant)
Methodology: determines the causality from X to Y based on the mutual information between the past of X and the current state of Y, conditioned on the past of (potentially) all variables other than X (sketched below).
Parameters:
k: the number of neighbors for the kNN estimation of mutual information
d: the lag at which the mutual information from the lagged source to the current state of the target is estimated
L: the number of conditioning nodes other than X and Y. A small L can produce false positives, since confounding and/or intermediate factors are not filtered out; too large an L incurs the curse of dimensionality when samples are few and raises the computational cost when samples are many.
Worst-case complexity: O((d + L + 1)N log N) + O(kN), where the first term is the kd-tree algorithm and the second is the inquiry of each neighbor.
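The complexity entries above come from kNN-based (conditional) mutual information estimation. As an illustration, the following Python sketch implements a standard Frenzel-Pompe (KSG-style) estimator of I(x; y | z), the kind of core computation RDI/cRDI performs with x the lagged source, y the current target state, and z the chosen pasts; the estimator itself is textbook, but its use here as a stand-in for the paper's exact routine is our assumption.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def knn_cmi(x, y, z, k=5):
    """Frenzel-Pompe kNN estimate of I(x; y | z); x and y are 1-D series,
    z is an (n, L) conditioning matrix."""
    x = np.asarray(x, float).reshape(-1, 1)
    y = np.asarray(y, float).reshape(-1, 1)
    z = np.asarray(z, float).reshape(len(x), -1)
    xyz = np.hstack([x, y, z])
    # Chebyshev (max-norm) distance to the k-th neighbor in the joint space.
    eps = cKDTree(xyz).query(xyz, k=k + 1, p=np.inf)[0][:, -1]
    def count(pts):
        tree = cKDTree(pts)  # the kd-tree build/search term in the table
        return np.array([len(tree.query_ball_point(pt, r - 1e-12, p=np.inf))
                         for pt, r in zip(pts, eps)]) - 1  # exclude self
    n_xz = count(np.hstack([x, z]))
    n_yz = count(np.hstack([y, z]))
    n_z = count(z)
    return digamma(k) + np.mean(digamma(n_z + 1) - digamma(n_xz + 1)
                                - digamma(n_yz + 1))
```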
Algorithm: uRDI and ucRDI (uniformized variants of RDI and cRDI)
Methodology: the same as RDI, except that the empirical distribution of the past samples is replaced with a uniform distribution (sketched below).
Parameters:
All parameters from RDI, plus:
BW: the bandwidth of the kernel density estimator
Worst-case complexity: O((d + L + 1)N log N) + O(kN) + O(N³), where the terms are, respectively, the kd-tree algorithm, the inquiry of each neighbor, and the kernel density estimation.
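One way to realize the uniform replacement is importance reweighting: estimate the density of the source's past with a kernel density estimator of bandwidth BW, then weight each sample by the inverse of its estimated density, so that expectations behave as if the past were uniformly sampled. The Python sketch below illustrates that idea with SciPy's Gaussian KDE; it is an assumption-labeled illustration of the uniformization step, not the paper's exact ucRDI procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

def uniformization_weights(x_past, bw=0.3):
    """Weights w_i proportional to 1 / p_hat(x_i), where p_hat is a Gaussian
    KDE with bandwidth factor bw (the BW parameter above). Reweighting by w
    flattens the empirical distribution of the source's past."""
    x_past = np.asarray(x_past, float)
    kde = gaussian_kde(x_past, bw_method=bw)  # pairwise kernels dominate cost
    w = 1.0 / np.maximum(kde(x_past), 1e-12)  # guard near-zero densities
    return w / w.sum()

# Illustration: rare past states receive the largest weights.
rng = np.random.default_rng(0)
x_past = rng.normal(size=500)
w = uniformization_weights(x_past)
print(x_past[np.argmax(w)], w.max())  # an extreme sample, upweighted
```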