Neural Complexity [11] (1994)
Definition: Sum of the average mutual information over all bipartitions of the system.
Theoretical grounding: Strong. Capacity or process: Process. Assumes stationarity: Yes. Empirical applicability: Low.
|
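To make the one-line definition concrete, here is a sketch of the standard formulation (following the original reference; the notation is assumed, not quoted from it): the average is taken over all subsets X_j^k of size k, and the sum runs over subset sizes.

```latex
C_N(X) \;=\; \sum_{k=1}^{\lfloor n/2 \rfloor} \Big\langle \mathrm{MI}\big(X_j^k \,;\, X \setminus X_j^k\big) \Big\rangle_j
```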
|
Causal Density [12] (2003)
Definition: "A measure of causal interactivity that captures dynamical heterogeneity among network elements (differentiation) as well as their global dynamical integration [12]."
Theoretical grounding: Strong. Capacity or process: Process. Assumes stationarity: Yes. Empirical applicability: Low.
Remarks: Calculated by applying Granger causality.
|
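As a hedged sketch of one common recipe (causal density is often taken as the fraction of ordered node pairs with a statistically significant Granger-causal interaction; the function name, lag choice, and significance threshold below are illustrative, not prescribed by [12]):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def causal_density(data, maxlag=2, alpha=0.05):
    """Fraction of ordered pairs (src, dst) where src Granger-causes dst.
    data: (time, nodes) array. A simplified, unweighted sketch."""
    t, n = data.shape
    significant = 0
    for src in range(n):
        for dst in range(n):
            if src == dst:
                continue
            # Column order is [effect, cause]: tests whether src Granger-causes dst.
            res = grangercausalitytests(data[:, [dst, src]], maxlag=maxlag, verbose=False)
            p = min(r[0]["ssr_ftest"][1] for r in res.values())
            if p < alpha:
                significant += 1
    return significant / (n * (n - 1))

# On independent noise, the result should hover near the false-positive rate alpha.
rng = np.random.default_rng(0)
print(causal_density(rng.standard_normal((500, 4))))
```
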
Φ (IIT 1.0) [9] (2004)
Definition: The amount of causally effective information that can be integrated across the informational weakest link of a subset of elements.
Theoretical grounding: Medium. Capacity or process: Capacity. Assumes stationarity: Yes. Empirical applicability: Low.
Remarks: Provided the founding hypothesis of the Integrated Information Theory (IIT) of consciousness. Applicable only to stationary systems.
|
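A hedged formal sketch of this "weakest link" idea, in the style of the original formulation: effective information from a part A to its complement B is the mutual information obtained when A is injected with maximum-entropy activity, and Φ is the bidirectional effective information evaluated at the minimum information bipartition (MIB):

```latex
\mathrm{EI}(A \to B) = \mathrm{MI}\big(A^{H^{\max}} ; B\big), \qquad
\Phi(S) = \mathrm{EI}\big(A^{\mathrm{MIB}} \rightleftarrows B^{\mathrm{MIB}}\big)
```
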
φ (IIT 2.0) [19], [20], [42] (2008)
Definition: Measure of the information generated by a system when it transitions to one particular state out of a repertoire of possible states, to the extent that this information (generated by the whole system) is over and above the information generated independently by its parts.
Theoretical grounding: Strong. Capacity or process: Capacity. Assumes stationarity: Yes. Empirical applicability: Low.
Remarks: Extension of IIT 1.0 to discrete dynamical systems.
|
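Schematically (a sketch in the spirit of [19], [20]): the effective information the system generates by entering state x₁, relative to a partition P = {M¹, …, Mʳ}, is the divergence between the repertoire specified by the whole and the product of the parts' repertoires, and φ takes this quantity across the minimum information partition (MIP):

```latex
\varphi(x_1, P) = D_{\mathrm{KL}}\Big[\, p(X_0 \mid x_1) \,\Big\|\, \textstyle\prod_{k=1}^{r} p\big(M_0^k \mid \mu_1^k\big) \Big],
\qquad \varphi(x_1) = \varphi\big(x_1, \mathrm{MIP}\big)
```
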
ΦE and ΦAR [21] (2011)
Definition: Rather than measuring the information generated by transitions from a hypothetical maximum-entropy past state, ΦE uses the actual distribution of the past state. "ΦAR can be understood as a measure of the extent to which the present global state of the system predicts the past global state of the system, as compared to predictions based on the most informative decomposition of the system into its component parts [21]."
Theoretical grounding: Strong. Capacity or process: Process. Assumes stationarity: No. Empirical applicability: Medium.
Remarks: ΦE is applicable to both discrete and continuous systems with either Markovian or non-Markovian dynamics; ΦAR coincides with ΦE for Gaussian systems [21]. Both fail to satisfy the upper and lower bounds required of integrated information [14], though the authors propose well-bounded variants of these measures.
|
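For Gaussian data these measures reduce to log-determinant expressions over covariance matrices. The sketch below computes the "whole-minus-sum" past-to-present mutual information for a single fixed bipartition; it omits the partition search and the normalization used in [21], and all names are illustrative:

```python
import numpy as np

def gaussian_mi(x, y):
    """I(X;Y) for jointly Gaussian row-variables, via log-determinants."""
    cx = np.atleast_2d(np.cov(x))
    cy = np.atleast_2d(np.cov(y))
    cxy = np.cov(np.vstack([x, y]))
    return 0.5 * (np.linalg.slogdet(cx)[1] + np.linalg.slogdet(cy)[1]
                  - np.linalg.slogdet(cxy)[1])

def phi_wms(data, part, tau=1):
    """Whole-minus-sum integrated information across one bipartition.
    data: (nodes, time); part: node indices of one half of the bipartition."""
    n = data.shape[0]
    rest = [i for i in range(n) if i not in part]
    past, present = data[:, :-tau], data[:, tau:]
    whole = gaussian_mi(past, present)
    parts = (gaussian_mi(past[part], present[part])
             + gaussian_mi(past[rest], present[rest]))
    return whole - parts  # can be negative, cf. the bounds issue noted above

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 2000))
print(phi_wms(x, part=[0, 1]))
```
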
PCI [6] (2013)
Definition: "The normalized Lempel–Ziv complexity of the spatiotemporal pattern of cortical activation triggered by a direct Transcranial Magnetic Stimulation (TMS) perturbation [6]."
Theoretical grounding: Weak. Capacity or process: Process. Assumes stationarity: Unknown. Empirical applicability: High.
Remarks: While PCI proves to be a reasonable objective measure of consciousness in healthy individuals during wakefulness, sleep and anesthesia, as well as in patients who have emerged from coma, it lacks a solid theoretical connection to integrated information theories.
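
A hedged sketch of the core computation: binarize the (sources × time) response matrix, flatten it, count Lempel–Ziv (LZ76) phrases, and normalize by the source entropy as in [6]. The thresholding step is simplified relative to the statistical procedure in [6], and the names are illustrative:

```python
import numpy as np

def lz76(s):
    """Number of distinct phrases in the LZ76 parsing of a binary string."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the phrase until it no longer occurs earlier in the sequence
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def pci_like(evoked, threshold):
    """Normalized LZ complexity of a binarized (sources x time) response."""
    binary = (np.abs(evoked) > threshold).astype(int)
    s = "".join(map(str, binary.flatten()))
    n = len(s)
    p = binary.mean()
    # source entropy of the binary matrix, used for normalization
    h = 0.0 if p in (0.0, 1.0) else -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return lz76(s) * np.log2(n) / (n * h) if h > 0 else 0.0

rng = np.random.default_rng(2)
print(pci_like(rng.standard_normal((60, 300)), threshold=1.0))
```
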
ΦMax (IIT 3.0) [4], [5] (2012–14)
Definition: Measure of the information specified by a system that is irreducible to that specified by its parts. "It is calculated as the distance between the conceptual structure specified by the intact system and that specified by its minimum information partition [43]."
Theoretical grounding: Strong. Capacity or process: Capacity. Assumes stationarity: Yes. Empirical applicability: Low.
Remarks: IIT 3.0 introduces major changes over IIT 2.0 and IIT 1.0: (i) it considers how mechanisms in a state constrain both the past and the future of a system; (ii) it emphasizes "a difference that makes a difference," not simply "a difference"; (iii) concepts are given a proper metric, the Earth Mover's Distance (EMD) [5]. Limitations: current-state dependency, computational intractability, and inability to handle continuous neurophysiological data.
|
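Schematically (a hedged compression of the quoted definition; the full IIT 3.0 algorithm unfolds a cause–effect structure over all mechanisms before this distance is taken, and the Max superscript denotes maximization over candidate subsystems):

```latex
\Phi^{\mathrm{Max}}(s) \;=\; \mathrm{EMD}\big( C(s),\; C^{\mathrm{MIP}}(s) \big)
```
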
ψ [22] (2014)
Definition: A principled information-theoretic measure of irreducibility to disjoint parts, derived using Partial Information Decomposition (PID), that resides purely within Shannon information theory.
Theoretical grounding: Medium. Capacity or process: Capacity. Assumes stationarity: No. Empirical applicability: Low.
Remarks: ψ is comparable to φ (IIT 2.0) rather than ΦMax (IIT 3.0). It addresses the three major limitations of φ identified in [20]: state-dependency and entropy; duplicate computation; and the mismatch with the intuition of "cooperation by diverse parts" [22]. It has desirable properties, such as not requiring MIP normalization and being substantially faster to compute.
|
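In PID terms (a hedged paraphrase of [22], with notation assumed rather than quoted), ψ is the information the whole state X carries about a target Y, e.g. the system's future, beyond the union information carried by its disjoint parts:

```latex
\psi(X \to Y) \;=\; I(X ; Y) \;-\; I_{\cup}\big(X_1, \ldots, X_n : Y\big)
```
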
Φ⁎ [14] (2016)
Definition: Represents the difference between the "actual" and "hypothetical" mutual information between the past and present states of the system, computed using the idea of mismatched decoding developed in information theory [14].
Theoretical grounding: Strong. Capacity or process: Capacity. Assumes stationarity: Yes. Empirical applicability: Medium.
Remarks: Emphasizes two theoretical requirements: first, the amount of integrated information should not be negative; second, it should never exceed the information generated by the whole system. Focuses on IIT 2.0 rather than IIT 3.0.
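
A hedged sketch of the decomposition in [14]: the "hypothetical" term is the information attainable by a decoder mismatched to the true joint distribution, one that treats the parts as independent, maximized over a decoding parameter β:

```latex
\Phi^{*} \;=\; I\big(X_{t-\tau} ; X_t\big) \;-\; I^{*}\big(X_{t-\tau} ; X_t\big),
\qquad I^{*} = \max_{\beta}\, \tilde{I}(\beta)
```
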
MMP-based variants of Φ⁎ and ΦAR [23] (2016)
Definition: Introduces the Maximum Modularity Partition (MMP), which is faster to compute than the MIP, making integrated information tractable for larger networks. Combining the MMP with Φ⁎ and with ΦAR yields two new measures.
Theoretical grounding: Strong. Capacity or process: Capacity (Φ⁎-based), Process (ΦAR-based). Assumes stationarity: Yes (Φ⁎-based), No (ΦAR-based). Empirical applicability: Medium.
Remarks: The new measures are compared with Φ⁎, ΦAR and Causal Density, and are motivated by the idea that the human brain has a modular organization in both its anatomy and its functional architecture. Calculating integrated information across the MMP reflects the underlying functional architecture of neural networks.
|
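A hedged sketch of the partition step only: treat the absolute correlation matrix as a weighted graph and take its maximum-modularity community division as the MMP; Φ⁎ or ΦAR would then be evaluated across this partition instead of searching all partitions. The graph construction and library choice here are illustrative, not the exact procedure of [23]:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def maximum_modularity_partition(data):
    """data: (nodes, time). Returns communities of the correlation graph."""
    corr = np.abs(np.corrcoef(data))
    np.fill_diagonal(corr, 0.0)      # drop self-loops
    g = nx.from_numpy_array(corr)    # edge weights from the matrix entries
    return [sorted(c) for c in greedy_modularity_communities(g, weight="weight")]

rng = np.random.default_rng(3)
# two weakly coupled 3-node blocks, to give the partition something to find
base = rng.standard_normal((2, 1000))
data = np.vstack([base[[0]] + 0.3 * rng.standard_normal((3, 1000)),
                  base[[1]] + 0.3 * rng.standard_normal((3, 1000))])
print(maximum_modularity_partition(data))
```
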
ΦC (this paper)
Definition: The maximally aggregated differential normalized Lempel–Ziv (LZ) or normalized Effort-To-Compress (ETC) complexity of the time-series data of each node of a network, generated by maximum-entropy and zero-entropy perturbations of each possible atomic bipartition of an N-node network.
Theoretical grounding: High. Capacity or process: Both. Assumes stationarity: Low. Empirical applicability: Medium.
Remarks: Bridges the gap between theoretical and empirical approaches to computing brain complexity. Based on the idea that the brain behaves as an integrated system, and acknowledging the similarity between compressionism and integrated information, ΦC is built on compression-complexity measures rather than information-theoretic measures.
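
One plausible reading of the recipe above, as a sketch only (the exact perturbation protocol, the ETC variant, and the aggregation rule follow the paper's definitions, which this simplified code does not claim to reproduce; the toy network and all helper names are hypothetical): for each atomic bipartition {node i} vs the rest, drive node i with a maximum-entropy (random) input and then a zero-entropy (silent) input, record every node's binarized time series, and aggregate the resulting differences in normalized LZ complexity.

```python
import numpy as np

def lz_norm(bits):
    """Normalized LZ76 complexity of a binary sequence (sketch)."""
    s = "".join(map(str, bits))
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c * np.log2(n) / n  # asymptotic normalization

def run(coupling, drive, rng):
    """Toy linear network with external input; returns binarized series."""
    n, t = drive.shape
    x = np.zeros((n, t))
    for k in range(1, t):
        x[:, k] = coupling @ x[:, k - 1] + drive[:, k] + 0.05 * rng.standard_normal(n)
    return (x > np.median(x, axis=1, keepdims=True)).astype(int)

def phi_c_sketch(coupling, t=1000, seed=0):
    n = coupling.shape[0]
    rng = np.random.default_rng(seed)
    scores = []
    for i in range(n):                      # atomic bipartition: {i} vs rest
        hi, lo = np.zeros((n, t)), np.zeros((n, t))
        hi[i] = rng.standard_normal(t)      # maximum-entropy perturbation of node i
        # lo[i] stays zero: the zero-entropy perturbation
        d = [lz_norm(a) - lz_norm(b) for a, b in zip(run(coupling, hi, rng),
                                                     run(coupling, lo, rng))]
        scores.append(np.mean(d))           # aggregate over nodes (one choice)
    return max(scores)                      # maximal aggregate over bipartitions

w = np.array([[0.0, 0.4, 0.0],
              [0.4, 0.0, 0.4],
              [0.0, 0.4, 0.0]])
print(phi_c_sketch(w))
```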