Author manuscript; available in PMC: 2021 Jul 1.
Published in final edited form as: Bioessays. 2020 Jun 2;42(7):e1900135. doi: 10.1002/bies.201900135

Rethinking Causation for Data-intensive Biology: Constraints, Cancellations, and Quantized Organisms

Douglas E Brash 1
PMCID: PMC7518294  NIHMSID: NIHMS1598549  PMID: 32484248

Abstract

Complex organisms thwart our simple rectilinear causality paradigm of "necessary and sufficient", with its experimental strategy of "knock down and overexpress". This Essay organizes the eccentricities of biology into four categories that call for new mathematical approaches; recaps for the biologist the philosopher's recent refinements to the causation concept and the mathematician's computational tools that handle some but not all of the biological eccentricities; and describes overlooked insights that make causal properties of physical hierarchies such as emergence and downward causation straightforward. Reviewing and extrapolating from similar situations in physics, I suggest that new mathematical tools incorporating feedback, signal cancellation, nonlinear dependencies, physical hierarchies, and fixed constraints rather than instigative changes will reveal unconventional biological behaviors. These include "eigenisms", organisms that are limited to quantized states; trajectories that steer a system such as an evolving species toward optimal states; and medical control via distributed "sheets" rather than single control points.

Keywords: causation, driver, constraint, feedback, hierarchy, emergence, quantization

1. Introduction

Twenty-first century biology provides more sophisticated examples of causality than those available to Aristotle and Hume. Several phenomena in physics are not far behind. It is no longer obvious what we want to know when we ask "What caused this tissue to become a heart, this cell to become a tumor, or this signaling pathway to become pro-apoptotic?". The explanation of why Z happened rather than something else may not fall into the form "This protein phosphorylated that protein at this amino acid". As a test for causation, "necessary and sufficient" has run its course in biology. Embodied experimentally as "knock down and overexpress", this proxy has led biologists to impose rectilinear, unidirectional, unilevel, billiard-ball models on biological systems that are neither linearly ordered nor unidirectional and that operate at multiple levels of hierarchical organization. Physicists have wrestled sporadically with the same issues in the form of thermodynamics, Mach's Principle, quantum entanglement, quantum electrodynamics, and the Josephson effect.[1-8] In both realms, the conundrum is acute. I first unpack the problem through the eye of a biologist, identifying conceptual obstacles to our mission of understanding causal events in biology, and then present specific biological examples that guide us toward promising solutions. The way forward is impeded at the outset by the hurdles of muddled concepts that we biologists tend to use.

2. Splitting Causation into Parts is a First Step toward Characterizing Complex Systems

The first hurdle is that the word "cause" conflates instigation with determination: why anything happened at all versus the rules that mould the outcome. The cancer biologist's "driver gene" is worse, conflating continued pushing with continued steering. Muddling these lets our search trip over determinants without recognizing them as causal. I will deem "cause" to have two parts, the instigator and the determinants. The second hurdle is that our colloquial idea of causality focuses on the instigator – a billiard ball or hormone that initiates an altered behavior of some other object. This is Newton's Laws physics, in which forces exerted by an initiating object alter the behavior of a second object in a way largely determined by the first. "Necessary" has never been a good candidate for this causal link because alternative routes to the endpoint are usually available. Nor is "sufficient", because even billiard balls require a level billiard table – a determinant. Recently these points have been put on a rigorous footing by the interventionist view of causality,[9] which focuses on whether a change ΔA in a putative cause creates an effect ΔZ when other variables are held fixed (see Box 1).

Box 1. The Interventionist Definition of Causation.

We tend to think of a cause as an event sufficient to produce the effect. But this conception doesn't handle the case of A ⇒ Y & Z (panel 1), where observing Y is sufficient to predict the later Z but Y is not a cause of Z. Correlation rather than cause was the contemporary argument against Koch's microbial explanation of tuberculosis, prompting his Postulates for causal testing.[10] 'Sufficient' also fails to recognize the role of the unchanged background determinants such as the level billiard table (Q in panel 2), which are surely part of any explanation that causation is being used for. These defects can be repaired by adding 'necessary', and Hume in fact defined a cause solely as 'necessary'. Yet 'necessary' alone doesn't guarantee an effect because moving a riverbank is moot if there is no water; instigating cue balls and thus 'sufficient' are still needed. 'Necessary and sufficient' was adopted for Lewis' counterfactual definition of 'causally depends on'.[11] Alas, this conception is so tight that it fails to handle an effect that can be achieved by alternative causes, A or B ⇒ Z (panel 2). Mackie repaired this defect with the INUS condition[12]: A cause is an insufficient but necessary part of a (multifactor) condition that is itself unnecessary (due to alternatives) but sufficient for the result. This resembles the biologist's "played a role in" and the physician's "predisposed to". Another benefit of INUS is that the multifactor condition includes both the billiard ball A and the billiard table Q; in the original example a sudden short-circuit and permanent flammable material are both required for one possible route to a housefire. In a biology experiment, knockdown tests "IN" and overexpression tests "US". Nevertheless, INUS does not capture the fact that causes and effects are really changes of objects, ΔA and ΔZ (panel 3).
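As a concrete, purely illustrative sketch, Mackie's housefire example can be written as a Boolean truth table; the variable names here are mine, not Mackie's:

```python
# Mackie's housefire: one route requires both a short-circuit (A) and
# flammable material (Q); an alternative route (arson, standing in for B)
# also suffices. Names and routes are illustrative.
def fire(short_circuit, flammable, arson):
    return (short_circuit and flammable) or arson

# A is INUS: within the condition (A and Q), A is Insufficient alone but
# Necessary; the condition itself is Unnecessary (arson is an alternative
# route) but Sufficient.
print(fire(True, False, False))   # A alone: insufficient
print(fire(False, True, False))   # condition without A: fails, so A is necessary in it
print(fire(True, True, False))    # the condition (A and Q): sufficient
print(fire(False, False, True))   # arson alone: the condition is unnecessary
```

A knockdown experiment probes the first two lines ("IN"); an overexpression experiment probes the last two ("US").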


The interventionist framework[9] solves this difficulty and others in a way that will please experimentalists, by explicitly focusing on manipulating A to create ΔA:

Direct cause A of Z with respect to variables like B and Q ≡ There is a possible intervention ΔA that results in ΔZ when all other variables besides A and Z are held fixed at some value by interventions.

The Δ formulation inherently brings in both presence and absence of the putative variable, in contrast to observing correlations between variables that are present. It also allows graded changes. A, B, and Q each satisfy the direct cause condition for some value of the other variables and so are direct causes. The variable that changed in a particular situation (e.g. A in panel 3), I will consider the instigator; those unchanged (e.g. Q) would be determinants. The definition of direct cause works even if A and B in panel 2 are each sufficient and both change; here the difficulty with identifying 'the cause' is not 'cause' but 'the'.
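A minimal sketch of the direct-cause test, assuming a toy structural model in which either cue ball (A or B) pots Z, but only on a level table Q:

```python
# Toy model for Box 1, panel 2 (variable roles assumed for illustration):
# Z occurs if A or B acts, and only if the table Q is level.
def z(a, b, q):
    return (a or b) and q

def is_direct_cause(var_index, background):
    """Interventionist test: does some intervention on the chosen variable
    change z while the other variables are held fixed at `background`?"""
    lo, hi = list(background), list(background)
    lo[var_index], hi[var_index] = False, True
    return z(*lo) != z(*hi)

print(is_direct_cause(0, (None, False, True)))   # intervene on A: direct cause
print(is_direct_cause(2, (True, False, None)))   # intervene on Q: direct cause
# With the table tilted (Q fixed at False), intervening on A has no effect --
# direct causation holds only "for some value of the other variables".
print(is_direct_cause(0, (None, False, False)))
```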

In the case of multiple multi-step paths (panel 4):

Contributing cause A of Z ≡ There is a directed path from A to Z such that each link Vi in this path is a direct-cause relationship and there is a possible intervention ΔA that results in ΔZ when all other variables that are not on this path are fixed at some value.

The definition of contributing cause works even if parallel paths cancel each other (crossbar in panel 4, lower), leaving only net causation, and works if both Vs and Ws originate from A. It does require that all Vs on the path be enunciated, so that they are not held fixed inadvertently.
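To see how cancellation hides a contributing cause, consider a hypothetical linear model in which A raises Z through V and lowers it equally through W:

```python
# Panel 4 with a cancelling crossbar: A raises Z via V and lowers it via W
# by the same amount, so the net effect of an intervention on A is zero.
def run(a, w_fixed=None):
    v = 2 * a                                    # path A -> V -> Z
    w = -2 * a if w_fixed is None else w_fixed   # path A -> W -> Z
    return v + w                                 # Z

# Total (net) effect: cancellation hides A entirely.
print(run(1) - run(0))                           # 0

# Contributing cause along A -> V -> Z: hold the off-path variable W fixed.
print(run(1, w_fixed=0) - run(0, w_fixed=0))     # 2
```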

Both definitions work not only when A and Z are molecules but also when multiple As are microscopic variables and Z is a macroscopic phenotype. An overexpression experiment achieves ΔA. Knockdown also creates Δ, but it presumes there is an activity to be knocked down and thus an upstream ΔA; knockdown therefore studies ΔV. Essential to the interventionist approach is that each intervention directly changes one and only one variable; interventions cannot behave like panel 1. If we don't know this to be the case in an experiment, then intervention is on a par with correlations between observations.[13]

Including the determinants leaves a deep challenge: it is rarely enough to know ΔA and point-to-point causes like Vs and Ws in panel 4 of Box 1. At its heart, causality is shorthand for a link between an initial state and a final state, each of which contains the same affecting and affected parts A-Z but in different configurations. A change in A's relations to its neighbors (a decrease in momentum, an increase in binding to a hormone receptor, movement of gasoline near a spark) is what causes a change in relations of the affected object. More precisely, these relations of A include the ones it has with the affected "other" object. A characterization of the entire system is needed – an extended version of panel 4. This is essentially the Lagrangian formulation of physics, where the walls, axles, and force fields are included. This formulation, not Newton's focus on forces acting on a single object, is what physicists and engineers use daily; it underpins the mechanics of industrial machinery and the Schrödinger equation.[14, 15] A lesson of physics and engineering is that constraints Q on the object – unchanging conditions such as a wall – are determinants that often are more sculptive of the final event than is the particular instigator. In biology, the premier example is the evolutionary role of selection pressure as a constraint or determinant compared to mutations as instigators.

The third hurdle is that biology has shown us instigators that are not external to the object being altered. A mutant oncogene is not coming from outside the organism. During an embryo's development, each step is the instigator of the next. Of course, the mutation or fertilization had its own cause, but once that occurs it acts as an instigator via the structural change it makes within the organism, rather than by imparting motive forces from without. Unlike an external instigator like the billiard ball, which is effective even if present briefly, a structural instigator must typically be sustained; if it reverts, its effects also revert.

The fourth hurdle is that we humans have two different causation demands on the link between initial and final states; these mainly involve the determinants. The first demand is prediction, the ability to know the final state based solely on the internal parts of the initial state and their interactions. The payoff is that we can then plan around that system's behavior – we predict the behavior of the tiger or poison ivy and then adjust ourselves. Predicting the future from the state's initial ingredients also lets us feel we "understand" the system. The second demand is control, the ability to change the system's final state by altering part of its initial state. In medicine, this ideally occurs at a simple determinant that serves as a control knob; in public health, control focuses on preventing the instigator. This control is also essential in the laboratory, allowing us to separate cause from correlation[16] and prove our predictive understanding is correct. In the interventionist view[9], an event's ability to control is essential to even defining causality.

We have now cleared our heads about the relationships we are looking for. Does this clarity about point-to-point causal steps resolve the catalog of challenges faced by experimental biologists identifying actual instigators, determinants, and control points?

3. Four Eccentricities of Biology that Challenge Rectilinear Causation

Those challenges spring from an omission in our colloquial picture of causation: in biology, the initial state and the final state are usually separated by intermediate states. Intermediate states break the billiard-ball model in specific ways that point toward a solution.

3.1. Rectilinearity is broken by pathway branching

When a ligand activates a receptor kinase on the cell's surface, the kinase activates more than one downstream next-target, each starting a signaling pathway; each of these then activates several next-targets further downstream. Ultimately, these pathways lead to transcription factors that bind to thousands of genes. Similarly, a neuron's dendrites form synapses with many downstream neurons; in neural pathways that use biogenic amines, axonal neurotransmitter release sites are not even restricted to synapses.[17] Branching also occurs with small molecules: The sunlight energy needed to generate vitamin D also produces other molecules with similar structures;[18] moreover, vitamin D not only acts as a ligand for a receptor that binds to genes, but also directly modifies receptor kinases and ion channels.[19]

This 1-to-Many branching is augmented by Many-to-1 branches: Each receptor kinase or target can usually be activated by alternative upstream ligands or receptors, and each gene is bound by a dozen or more different transcription factors. These often include both excitatory and inhibitory contributions that can partially cancel each other, a computation. Indeed, any computation is essentially a Many-to-1 branch from many operands to one result, so biological computations for insulin secretion amounts, visual tracking of a moving object, or the mental representations that underlie mind will also break rectilinearity.
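A sketch of such a Many-to-1 computation, with invented weights: a promoter summing excitatory and inhibitory transcription-factor inputs that partially cancel to yield one transcription rate:

```python
# A gene's promoter as a Many-to-1 branch: several transcription factors,
# some excitatory (+) and some inhibitory (-), partially cancel into one
# output. Weights and occupancies are purely illustrative.
weights   = {"TF1": +0.8, "TF2": +0.5, "TF3": -0.6, "TF4": -0.4, "TF5": +0.2}
occupancy = {"TF1": 1, "TF2": 1, "TF3": 1, "TF4": 0, "TF5": 1}

net_input = sum(weights[tf] * occupancy[tf] for tf in weights)
rate = max(0.0, net_input)   # simple rectification: no negative transcription
print(round(net_input, 2), round(rate, 2))
```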

Superimposed on these within-pathway branches is the fact that many proteins and metabolites have more than one function and so branch onto more than one pathway. The tumor suppressor TP53 is notorious, involved in cell cycle arrest, apoptosis, angiogenesis, DNA repair, melanogenesis, and more; it is even upregulated by vitamin D. Lens crystallins are simply proteins used in another body site that happen to make clear crystals, with the particular co-opted protein differing between species.[20] RAC1, a GTPase that binds to the cytoskeleton and modulates cell motility, is also the regulatory subunit for NADPH oxidase, which generates the superoxide that phagocytes use to kill invading pathogens and also initiates a chemiexcitation reaction that creates sunlight-like DNA damage in the dark.[21] Which of these functions is the reason RAC1 mutations are an instigator for melanoma,[22] and which are determinants? Such dual-use is normal in engineering: the strut that holds an airplane engine onto the wing is important for heat dissipation.

Dense branching makes it difficult for the experimenter to alter one and only one element of the initial state in order to identify "causal" elements. Indeed, there is no longer a guarantee that a pathway even has a control element. An underappreciated strength of studying the genetics of single-gene inherited diseases is that it reveals pathways that do have a control point.

Branching is often a feature, not a bug, of the network. The usual view in biology is that branching provides backup pathways and robustness. A more sophisticated example is given by computer neural networks. Their computations were initially limited to linear relations (in the mathematical sense of y = mx + b), which did not include the logically essential exclusive-OR.[23, 24] The breakthrough came when an intermediate stage with branches was added.[25] Directed acyclic graphs, as in Box 1, are capable of modeling branching but the branches must be included in the model. Biological branching is not ignorable.
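The exclusive-OR breakthrough can be reproduced in a few lines: two hidden threshold units (an OR and an AND) whose outputs are recombined. This is a sketch of the intermediate branching stage in general, not any particular published network:

```python
# XOR cannot be computed by a single linear threshold unit, but one hidden
# (branching) layer suffices: XOR(x, y) = OR(x, y) AND NOT AND(x, y).
def step(s, threshold):
    return 1 if s >= threshold else 0

def xor(x, y):
    h_or  = step(x + y, 1)        # hidden unit 1: OR
    h_and = step(x + y, 2)        # hidden unit 2: AND
    return step(h_or - h_and, 1)  # output: OR minus AND

print([xor(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```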

3.2. Unidirectionality is broken by feedback

In feedback, the result Z of Box 1 also has an arrow back to the causes A or B. Feedback is ubiquitous in metabolic pathways, signaling networks, and the brain's neural paths. Feedback inhibition of early enzymes in a pathway by the end product is well-known. Conversely, the signaling pathways mediated by JNK or HIF1α have positive feedback loops – they activate genes producing interleukins, TNFα, IGF2, and TGFα, the very ligands that activate these pathways. Blood pressure and glucose levels are regulated by homeostats that compare a setpoint to feedback from physiology sensors; this maintenance of constancy can be generalized to goal-oriented cognitive functions.[26-28] In neuroscience, a similar mechanism is minimization of "surprisal", the Bayesian difference between an organism's current perception and a mental world-model based on experience.[29, 30] One wonders whether Homo sapiens descended not from apes but from homeostats. Loops present in the cell cycle regulatory pathway highlight the fact that it is often unclear who is upstream and who is downstream.[31] TP53 is widely understood to inhibit the cell cycle and activate apoptosis, using its E2F1 downstream target to establish a negative feedback pathway that keeps TP53 in check. Knocking out the gene for TP53 thus knocks out apoptosis. But for UV-induced apoptosis, additionally knocking out E2F1 in a mouse counterintuitively restores the apoptosis; moreover, an E2F1 knockout alone elevates apoptosis. It is as if E2F1 is the main story, inhibiting apoptosis, and TP53 is now the regulatory addition.[32, 33] Who is upstream and who is downstream? Who is the instigator and who is a determinant? Moreover, the answer differs for apoptosis induced by dexamethasone,[34] for UV in a mouse knocked out in a different exon of the gene for E2F1,[35] and for UV in human cells.[36] So what was the causal answer we sought?

Feedback designs are definitely a feature, not a bug: Mutually inhibitory feedback loops are the way electrical engineers construct a switch (the "flip-flop") and the way the eye enhances edges. Signal amplification is achieved by positive feedback. In those cases, the intermediate state is just the final state at an earlier time. Neural networks became powerful with the introduction of backpropagation of outputs.[37, 38] Kauffman studied a network of randomly connected lightbulbs, each activated by its input lightbulbs according to a randomly assigned AND or OR rule. The arrangement led to a stable temporal pattern of light flashes if each bulb was connected to exactly 2 others.[39] Few bioinformatics models incorporate feedback loops, both because loops are viewed as tweaks to the main pathway and because they demand intense computational resources. Yet, biology cannot be "approximately understood" by ignoring them. Feedback also presents critical issues for medicine. Positive feedback is a feature of inflammation pathways like JNK; correspondingly, chronic diseases behave as the phenotypic expression of slow, self-amplifying pathways.[40] Heart attacks are an acute problem because of the positive feedback within the blood clotting pathway. However, directed acyclic graphs omit feedback by definition. Biological loops are not ignorable.
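Kauffman's lightbulb network is easy to simulate. The sketch below uses an arbitrary seed and N = 12 bulbs, each driven by K = 2 others under a randomly assigned AND or OR rule, and reports the short stable cycle the trajectory settles into:

```python
import random

# Kauffman-style random Boolean network: N lightbulbs, each updated from
# K = 2 others by a randomly assigned AND or OR rule. Parameters illustrative.
random.seed(0)
N = 12
inputs = [random.sample(range(N), 2) for _ in range(N)]
rules = [random.choice([lambda a, b: a and b, lambda a, b: a or b])
         for _ in range(N)]

def step(state):
    return tuple(rules[i](state[inputs[i][0]], state[inputs[i][1]])
                 for i in range(N))

state = tuple(random.choice([0, 1]) for _ in range(N))
seen, t = {}, 0
while state not in seen:          # iterate until a state repeats
    seen[state] = t
    state = step(state)
    t += 1

cycle_length = t - seen[state]
print("settled after", seen[state], "steps into a cycle of length", cycle_length)
```

Because the state space is finite and the update deterministic, every trajectory must enter a cycle; with K = 2, Kauffman found those cycles to be remarkably short and stable.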

3.3. Transience and reversibility are broken by triggers and switches

Molecular biologists tend to think of a cause as transient, after which the receptor kinase signaling, blood sugar level, or neural firing fades to baseline. Developmental biologists are more accustomed to the fact that these and other events can flip switches that cause permanent change. In the extreme case, the role of a "cause" is not to create the change but to render the change irreversible: The genetic assimilation concept of Waddington and Schmalhausen proposed that environmental changes induce altered phenotypes that become petrified by mutations.[41, 42] One wonders whether the role of a mutant oncogene is similarly to act as a pawl for the ratchet of environmentally-induced physiological disturbances.[43] If all the information lies in the determinants, with little input from a Newtonian instigator, then the instigator is a trigger: it determines only when the predetermined event occurs. Triggered biological events, such as hormone binding, resemble logical if-then relations rather than physics. Reversing such states in chronic diseases may require more than transient medical intervention. Less dramatic examples of nonlinear dependencies in biology include thresholds and synergies such as cooperative binding of molecules.
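A minimal model of a trigger acting on a switch: two mutually inhibitory genes, where a brief pulse merely selects which predetermined stable state becomes permanent. The dynamics and parameters are illustrative, not drawn from any specific pathway:

```python
# A mutual-inhibition flip-flop: genes X and Y repress each other. The
# transient trigger supplies no information about the outcome; it only
# selects which of two predetermined stable states becomes irreversible.
def settle(x, y, trigger_on_y=0, steps=50):
    for t in range(steps):
        pulse = 1.0 if (trigger_on_y and t < 3) else 0.0   # brief trigger
        x, y = max(0.0, 1.0 - y), max(0.0, 1.0 - x + pulse)
    return round(x), round(y)

print(settle(1.0, 0.0))                  # stays in state (X on, Y off)
print(settle(1.0, 0.0, trigger_on_y=1))  # brief pulse flips it to (X off, Y on)
```

Note that the flipped state persists long after the pulse ends, which is why reversing such states may require more than transient intervention.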

3.4. Unilevel interactions are broken by physical hierarchy

Thermodynamics was the first scientific field to recognize that the effect could occur at a different hierarchical level from its cause: Macroscopic variables like temperature, pressure, work, and "heat flow" could be explained in terms of the statistics of microscopic collisions between molecules.[1, 2] Yet key relationships exist directly between temperature, pressure, work, and heat, derived long before molecules were demonstrated.[44] The thermodynamicist's approach to creating hierarchy was to define a macro variable as an average of a micro variable, which in biology is like grinding tissue in a homogenizer. Yet, this 2-level dichotomy is a suggestive paradigm for the multilevel hierarchies seen in biology, which span subatomic, molecular, macromolecular, signaling network, cellular, cellular network, organ, organ system, mental representation, and sociology levels of organization. A macro level is typically a coordinated collection of identical copies in the micro level, spread over space (Box 2).

Box 2. Physical Hierarchies and Emergence.

A macroscopic level is a coordinated collection of identical copies in the micro level, typically spread over space. The key is "coordinated", which is managed by physical structural relations between the copies, not by set-subset relationships.


How well do we understand causality in hierarchies? In the last century, biologists have executed a spectacular job of reductionism – identifying parts and then the parts of those parts. But reductionism comes with the obligation to put the parts back together again.[7] Rejoining requires knowing not just the parts but the relations that join parts at the same level, and knowing which relations create a higher level from the lower one. Biology has been deficient in both regards. Humans have a blindspot for relations that create new levels, perhaps explaining why many fields and computational approaches consider hierarchies to be equivalent to multilevel set-subset relations.[45-47] An illuminating experiment is to ask a friend to name the parts of a chair – s/he will not mention bolts or glue. The job of a Dounce homogenizer or chaotropic salt is precisely to remove these relations in order to dissociate an organism into parts, but we then forget to study and include these relations when thinking about hierarchies. This blindspot has made two deep properties of hierarchies seem mystically opposed to reductionism, rather than companions to it: emergence of novel behaviors and downward causation (upward and downward arrows, discussed in the text).

The word "emergence" correctly captures the creation of novelty at the higher level by using the ingredients of the lower level, but it is often presented as meaning that combining lower-level parts in a sublime way creates a novel function from nothing. A perceptive solution was Pattee's suggestion that the way to get novelty, i.e. an emergent function, is not to create something new but to constrain what was there before[48] (Box 2). An army of ants is a mob of ants each constrained from moving in any direction it wants. An enzyme constrains a pool of reactants that would otherwise participate in many alternative slow, temperature-driven reactions; enzyme substrates instead participate in a single rapid reaction. The macroscopic roundness of a balloon[49] is a consequence of the geometry of individual rubber molecules plus the fact that each molecule is bound to its neighbors at the same acute angle. The crucial ingredient is the large-scale repetition of that microscopic property; the creator of their correlation is the true originator of micro/macro emergence, in this case it is the internal air pressure. In each case, adding a constraint to a system that is doing many things weakly has restricted it to doing a few things well; it is addition by subtraction. The whole is firmly less than the sum of its parts because constraining the lower-level parts has discarded uncorrelated behaviors and left correlated ones that define a higher-level behavior.

Constraints also underlie the Lagrangian formulation of physics mentioned earlier. Although Lagrange presented it as a mathematical trick, the origin was D'Alembert's attempt to adapt the formalism of static constraints into dynamics by introducing a constraint force akin to a wall.[3, 50-52] Essentially, a constraint goes beyond Newton's specification of the state of one object to specify the relation between two objects. A subtlety is that statics only requires the constraint to be in one place – the point where the ladder touches the wall – but D'Alembert's dynamics allows the constraint to be present at each position the moving object will occupy. In physics, an extended constraint is called a "field", like gravity; in biology, it is called the "environment". Another subtlety is that living organisms differ from physics by generating the constraints themselves; their constraints generate their constraints.[53]

In the extreme case of hierarchy, phenomena emerging at the higher level are independent of the details at the lower levels and rely solely on the constraints. In physics, this situation is termed "independence of microscopics".[7, 8] An archetype is the Chladni plate (https://sciencedemonstrations.fas.harvard.edu/presentations/chladni-plates). If we sprinkle a flat plate with sand grains and set it to vibrating, distinctive patterns emerge that depend on the shape of the plate (Figure 1a). A new pattern emerges if we place our thumb on the edge or if the vibration frequency changes (http://dataphys.org/list/chladni-plates). The high-level pattern does ultimately result from collisions of sand grains governed by Newton's Laws. These collisions could be computed at the microscopic level, but a) we don't care, because the particular collisions that led to the pattern today will be different on Thursday and b) we lack the computational power. Why do the microscopics disappear? In effect, many of the motions of a sand grain cancel out, as do many of the collisions that acted on that grain. What is left are the patterns that are consistent with the external constraints, constraints such as the size and edges of the plate and the frequency of vibration. A similar cancellation produces the coherent light of a laser.[54] These "equilibrium explanations"[55] are sometimes considered acausal, but the interventionist definitions remain valid across hierarchical levels. What we seek in order to feel we understand the patterns are not the links between sand grains, the billiard-ball causality, but the rules that link patterns to the constraints that cause them – a new form of causality focusing on determinants like plate shape rather than instigators like the vibration. The experimental challenge lies in determining how to avoid measuring every copy of each micro component.
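A one-dimensional stand-in for the Chladni plate makes the point computable: a string clamped at both ends admits only the standing-wave patterns consistent with its boundary constraints, whatever the microscopic history. The numbers below are illustrative:

```python
# A string of length L clamped at both ends (the constraint) admits only
# quantized modes f_n = n * v / (2 * L); the nodes are where "sand" would
# accumulate. L and the wave speed v are illustrative values.
L = 1.0
v = 340.0

modes = [n * v / (2 * L) for n in range(1, 5)]
print([round(f, 1) for f in modes])        # [170.0, 340.0, 510.0, 680.0]

# Nodes of mode n sit at x = k * L / n, independent of how the string was
# excited -- the constraint, not the instigation, sculpts the pattern.
n = 3
nodes = [round(k * L / n, 3) for k in range(n + 1)]
print(nodes)                               # [0.0, 0.333, 0.667, 1.0]
```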

Figure 1.


a) Sand grains on vibrating Chladni plates of different shapes epitomize the conundrum facing biology. Vibrations drive the sand grains into quantized patterns via collisions that follow Newton's Laws, but those patterns depend not on the particular collisions but on the plate's size, shape, and vibration frequency. Do we seek causality in the instigators of collisions or the constraints on patterns? Photo courtesy Harvard Natural Sciences Lecture Demonstrations (https://sciencedemonstrations.fas.harvard.edu/presentations/chladni-plates). b) The central point represents a stable attractor in state space of a system governed by nonlinear equations. In a cell, the x and y axes would be physiologic quantities such as redox potential and metabolic rate. If the system's state varies only due to haphazard influences (dotted line), it may eventually encounter the attractor and become stabilized in that state. This might be one of the stable patterns in (a). If states near the attractor are re-oriented toward it, migration becomes directed (solid curve). This happens under conditions described in the text. The circle indicates a stable limit cycle, to which the system migrates if initially positioned at a point outside. Modified from ref.[108]
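The attractor geometry of Figure 1b can be sketched with a standard textbook system, dr/dt = r(1 − r²), whose stable limit cycle at radius r = 1 captures any wandering trajectory. This is an illustrative stand-in, not the model of ref.[108]:

```python
# Radial dynamics dr/dt = r * (1 - r^2): any starting radius except the
# origin is drawn onto the stable limit cycle at r = 1 (forward Euler).
def evolve(r, steps=2000, dt=0.01):
    for _ in range(steps):
        r += dt * r * (1 - r * r)
    return r

print(round(evolve(0.1), 3))   # from inside the circle  -> 1.0
print(round(evolve(2.5), 3))   # from outside the circle -> 1.0
```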

"Downward causation" is an apparently paradoxical phenomenon in hierarchical systems in which an emergent phenomenon at the higher, macroscopic level instigates changes at a lower level.[56-58] How can an abstract macro concept like voltage affect a concrete object like a protein? Yet, downward causation needn't invoke any new laws of biology (Box 3).

Box 3. Downward Causation.

Macroscopic properties like voltage or pressure emerged when a correlation was established between many identical sites of microscopic properties like ion concentration or macromolecule displacement. Voltage gradients then activate individual ion channels, and mechanical stresses from a bent tissue sheet or cilium influence differentiation of individual cells. Biological computations have even greater opportunity for hierarchy: An important class is symbolic computation,[59] in which the output symbol has the same format at every step or level of computation whether it be 1 and 0, neural spikes, or macromolecules. This uniformity allows the output of a high level of the hierarchy to be easily introduced as an input to a lower level, creating feedback down hierarchical levels. For example, sensory systems receive input from cortical processing centers. What does it mean for the macro level to influence the micro?

First, a higher level is not in a new place. It consists of the same parts but now coordinated in space or time. Even a temporal correlation is usually ultimately a spatial correlation: my body location correlated with a clock's hand position, blood velocity with the conformation of the heart. A higher level looks like a new place because, when discussing it, we often show all the additional parts that are now coordinated, such as many 'copies' of arteries and veins as well as the heart. But they were already there, a fact clearer when we talk about coordinating the identical cells of an epithelium to produce an ion gradient. Moreover, this coordination is enforced by additional parts – bolts and cadherins – that we tend not to include in drawings of the microscopic level.

Three mechanisms seem to account for the several cases cited in the literature[60]. a) 1:1. When two car bumpers collide, they do so molecule by molecule; we sum the result to observe two "bent fenders". Ion transport by the cells of an epithelium is conducted channel by channel; we sum the result to observe "the voltage gradient". b) Many:1. A single molecule in the fender or epithelium can be affected by several members of the macro level constraint, e.g. ions, integrating them. If the macro level constraint is a pattern rather than a single value such as voltage or pressure, this is a non-trivial advance that allows the micro level to respond to patterns. c) 1:Many. In a laser, the emergent coherent light beam interacts with each of the atoms that produce it, functioning as a pacemaker.[54] In each case, the macro events affect micro events by increasing the probability of an individual lower-level event such as a conformation change in a single ion channel. Because the macro events were correlated, the micro events become correlated despite each interaction being itself microscopic. The real causal question is how the correlation was established at the micro level. That, in turn, can be by cancellation of non-conformers, or self-assembly of many molecules, or spread of a change at one site to the whole tissue[61]; the latter is not unlike compressing a mole of gas molecules by pushing the thin handle of a piston. Causal rules that apply only at the higher level[62] are not required.

One downward causation rule has been proposed to be unavoidably required in biology: selection pressure[56]. Yet note that if our ion channel were to diffuse to a different spot in the cell membrane, it would still find itself surrounded by ions; hence the ions qualify for the definition of a field or an environment. And in evolution, an environmental constituent that alters an organism's constituent so as to increase the organism's reproduction rate or survival is simply selection. So Pattee's rule for upward emergence and Campbell's selection rule for downward causation are identical – constraints. Neither is "irreducible"; indeed, both rely on the microscopic structure.

Can changing a macro state like voltage have causal efficacy on other macro states like current, or are macro changes merely correlations between epiphenomena of micro causal events? A macro piston sets the volume of gas by downward causation (it is a constraint), although the gas molecules could be in many alternative spatial locations at this same volume; these alternative micro states have been termed "equivalence classes".[63] If the piston falls, the micro states will change. A subtle point is that only if all the equivalent micro states in the initial volume's equivalence class are also members of the same final equivalence class, will each of them produce the same new macro volume state.[57] That is, only in this case do we have

a ——→ b                    a ——→ b
↓         ↓    rather than      ↓         ↓
A ——→ B                    A          B

and thus hierarchical causality rather than mere correlation between A and B. A more sophisticated way of achieving macro-macro causality arises when the macro level contains a homeostat's setpoint or goal, to which the micro level must adapt.[60, 63] (I will show later how such homeostats can arise.) The setpoint is again a constraint. This property of homeostat arrays would then allow consistent causal influences between, for example, neural representations of the world.

In the end, navigating physical hierarchies involves reductionism followed by a U-turn upward until reaching the components-plus-relations at the desired level of description. Sometimes the new description simplifies the subject by redefining what is relevant; as my biochemistry professor once remarked, "A molecular biologist is someone who thinks adenine is A". In other circumstances new behaviors emerge, yet these are founded on lower-level parts and their relations. And sometimes a second U-turn imposes constraints on a lower level. That U-turn is the origin of the observation that living organisms' constraints generate their own constraints[53] and do so via downward constraints that look like control.[48] Constraints at the micro level that are coordinated across space, usually by structure, generate a coordinated function at the higher level. This spatially extended function acts as an environment: a spatially extended constraint on the behavior of a different set of micro level objects, whose behaviors in space and time become constrained and coordinated by that environment.

In synopsis, any computation of biological causation is empty unless it incorporates these four eccentricities. Fortunately, they contain the seeds of the way forward to explanation, prediction, and control.

4. What Will the New World Look Like?

Including these eccentricities – particularly signal cancellations and nonlinearity on the background of feedback and physical hierarchies – will open the door to what may be an unorthodox world.

4.1. Forms of Explanation – Eigenisms Emerge from Graded Effects Broken by Quantization

Cancellation of signals.

The cross connections of branched cell signaling networks invite cancellation: five "yes" and two "no", executed by phosphorylations and dephosphorylations, would seem to output three "yes". Diffusible neurotransmitters might behave similarly. Even identical signals from two sources will cancel if they are periodic, like a sine wave, and there is a suitable time lag between them ("phase shift").[64] A biologist's intuition is that a cause acting through a collection of cross-connected branches will produce graded effects, with greater stability than a single path. But in physics the result is often a set of discrete options, like the Chladni plate patterns or flashing light bulbs. Famously, the energy levels of atoms and molecules have discrete options, alternative solutions to the Schrödinger equation termed "eigenstates". Less famously, Mott wondered why the spherical wave of an alpha particle emitted during radioactive decay appears as a straight track in a cloud chamber.[5] He found that if the Schrödinger equation included all the chamber's gas atoms, the phases of the wave-atom interactions throughout the chamber canceled each other out except in narrow cones along a line originating at the decaying nucleus. The Feynman formulation of quantum electrodynamics adopts a similar view for electromagnetic fields, assuming that all possibilities can occur but most cancel each other out.[4] These situations involve wavelike behaviors that carry phase information as complex numbers, so the mathematics of cells would likely differ.
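Both forms of cancellation can be sketched numerically; the vote counts and the half-period phase shift are illustrative assumptions:

```python
import math

# Vote summing: five "yes" and two "no" signals, as from opposing
# phosphorylations and dephosphorylations, net out to three "yes".
votes = [+1] * 5 + [-1] * 2
net = sum(votes)

# Phase cancellation: two identical sine signals, one delayed by half
# a period, sum to (numerically) zero at every time point.
period = 1.0
ts = [i * period / 100 for i in range(100)]
s1 = [math.sin(2 * math.pi * t / period) for t in ts]
s2 = [math.sin(2 * math.pi * (t - period / 2) / period) for t in ts]
residual = max(abs(a + b) for a, b in zip(s1, s2))
print(net, residual)
```

A smaller phase shift gives partial rather than complete cancellation, which is why the output of such a network depends on timing, not just on the roster of signals.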

Nonlinear equations.

An analogous situation arises for particles, in systems whose behavior is described by nonlinear equations (in the mathematical sense). Initial trajectories can be quite different depending on the starting conditions, but in many cases they tend toward the same final state. The multitude of trajectories, the common final state, and the region in which these occur are compactly described in terms of attractors, limit cycles, and basins of attraction[37, 39, 65-67] (Figure 1b). This formulation has been useful in describing cardiac behavior and gene expression[68, 69], and it is hard to see how the rest of nonlinear biology could avoid such features. Where there are many attractors, the situation resembles eigenstates: not all states can be reached, and what is stable is a set of discrete, widely separated attractors. We tend to think that a cell has an enormous repertoire of behaviors, and that an external stimulus nudges the cell from one to another. But most of the real world is nonlinear, relying on thresholds and synergies, so cell states and embryos are likely to contain attractors and basins, and to respond to stimuli by hopping from one basin to another or by refashioning basins. Embryos do not occasionally make an organ that is part liverish and part kidneyish. A subtle point is that having a stable spot is not enough; a cell would meander through state space until stumbling into the attractor and being trapped there (Figure 1b, dotted line). It is essential that a state near the attractor be redirected toward that attractor. This will happen if the system's nonlinear equations are such that a ≡ d²r/dt² = k·(df(r)/dr)·r̂, where a is the vectorial acceleration, f(r) describes the attractor's influence at distance r, and r̂ is the unit-length vector from the attractor to the cell state's position.
This is essentially Newton's F = ma for cell states, if k involves properties of the attractor and the cell analogous to charge and mass, i.e., reflecting their propensity for and resistance to change. This geometry can be described as a well in a state space whose z axis is a potential energy that increases with distance from the attractor; the state will be redirected toward a minimum energy. An individual attractor thus acts as a homeostat even if there is no obvious sensor, setpoint, comparator, and effector.[39] Surprisal computations can also lead to attractors.[29] If embryogenesis is a trajectory through basins that achieves the embryo's structures despite perturbations,[58] and if the system of embryo plus environment also has basins, then species needn't meander through state space to evolve a better organism. Fitness wells would explain the feasibility of evolution more perspicaciously than fitness peaks. Although attractors have already been identified in biological systems, my point is that attractors and eigenstates are not just characteristics of a particular system but are examples of the form in which complex biological causation can be described. Including measures of susceptibility to change would upgrade the current kinematics-like description of attractors to predictive dynamics.
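A minimal sketch of such attractor dynamics, using an invented double-well "potential" and the overdamped limit (first-order rather than Newton's second-order equation, for simplicity):

```python
def grad_V(x):
    # V(x) = (x**2 - 1)**2, a double well with attractors at x = +/-1
    return 4 * x * (x * x - 1)

def settle(x0, dt=0.01, steps=5000):
    """Overdamped gradient descent: a state near an attractor is
    actively redirected toward it, not left to wander."""
    x = x0
    for _ in range(steps):
        x -= dt * grad_V(x)
    return x

# Four different starting conditions, two basins of attraction
finals = [round(settle(x0), 3) for x0 in (-1.7, -0.4, 0.3, 1.9)]
print(finals)
```

Trajectories starting anywhere in the left basin end at −1 and those in the right basin at +1: discrete, widely separated final states rather than a graded continuum.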

For either cancellations or attractors, we might call the organism's discrete states "eigenisms". The number of options may be smallish – cells, organisms, and ecosystems being limited to self-consistent states in which the various signals do not erase each other. Indeed, a common experience in the field of clinical biomarkers is that about 30 are needed to firmly diagnose a disease state – not 10 and not 1000.[70-72] If a disease state is not a broken organism but an alternative eigenism, 30 variables may suffice to distinguish it from the other eigenisms the organism could have adopted.

Eigenisms then suggest consequences: Are murine cells easier to transform to malignancy than human cells because they have fewer distinct states, facilitating the switch to forbidden states? Does lower metabolism or slower molecular turnover with age fuse formerly-distinct states, or make hopping easier, leading to chimeric tissues that underlie the impaired homeostasis that constitutes senescence? Do the energy levels that define a molecule's eigenstate have an analog in organismic biology – such as information content[37, 73, 74] or some other analog to the energy, momentum, and position variables of the Schrödinger equation – so that eigenisms are not just different but can be arranged vertically (Figure 2)? If so, particular conformations of the organism would be most susceptible to switching between eigenisms.

Figure 2.

A potential energy surface for an Eigenism. This figure is actually the potential energy surface for adjacent thymines in DNA, the two horizontal axes being the angle and distance between bases and the z axis being energy. The lower surface is the ground state, the upper surface is the first excited state, and the conical regions are the conical intersections at particular geometries that allow easy transitions between states. In the case of eigenisms, the x and y axes would be physiology parameters such as redox potential and metabolic rate; the z axis would be an analog to energy or information. DNA calculation and illustration by Dr. Lluis Blancafort, Univ. Girona.[109]

4.2. Prediction – Patterns Replace Mechanism

The rules governing pattern creation would seem to be the causal understanding that we sought. A program in which experimental data drives a search for the appropriate mathematical explanation is the following: First find the parts, at which we are skilled, plus the relations between parts. Some of these relations will be constrained by structures we need to discover. Next, a handful of the relations or constraints on them will dictate higher level patterns or states via rules, and changes to the constraints will precipitate changes in the patterns. To identify the relevant relations and constraints, we need to manipulate them and observe the result. In engineering, this is done by imposing specific signals such as sine waves or a clamp and deducing the "transfer function" from the shape of the output. In biology, this could be done by optogenetics coupled with a multitude of real-time probes.[75] To identify the patterns, we initially observe states provided by Nature, by applying big data methods such as clustering single cell measurements according to abstract principal component axes.[76] The variables underlying the principal components would indicate which parts and relations are important. Ideally, particular principal component axes would reveal structures resembling Figure 1b or 2. Those figures depict the system's allowed changes over time, but time series measurements are difficult to come by. Single-cell technologies provide a potential solution because, when the experiment is performed, some cells will be present from each stage of the temporal series. The trajectory's sequence will still need to be determined, but the existence of attractors or energy surfaces will be evident from the snapshots. This analysis would be continued by perturbing the system. 
In practice, the opposite order is likely to be fruitful: we are in the position of Kepler looking for a pattern to the movement of the planets, and only later Newton finding a law that explains the causal micro level relations. But we have reached the stage of Copernicus, concluding that a new worldview is needed.
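The clustering step can be sketched with synthetic data; the cell and gene counts, and the five discriminating "genes", are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "single-cell" snapshots: 200 cells x 50 genes, drawn from
# two hypothetical states that differ along a handful of genes.
state_a = rng.normal(0.0, 1.0, size=(100, 50))
state_b = rng.normal(0.0, 1.0, size=(100, 50))
state_b[:, :5] += 4.0          # five genes distinguish the states
X = np.vstack([state_a, state_b])

# Principal components via SVD of the mean-centered matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]               # projection onto the first component

# The two states separate along PC1 without any labels supplied;
# the loadings in Vt[0] point to the genes that matter.
gap = abs(pc1[:100].mean() - pc1[100:].mean())
print(f"PC1 separation between states: {gap:.1f}")
```

In real data the discrete states would not be known in advance; their appearance as separated clusters along principal component axes is the signature to look for.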

The end result may even be laws or principles rather than individual mechanisms. Most biologists are not very confident that compulsory principles of biology exist, yet several higher-level principles have been found.

• The principle of kinetic proofreading. The fidelity of a biochemical reaction can exceed the limit set by the Km if both correct and incorrect reaction products are discarded or delayed but at different rates.[77, 78] This principle has been observed in processes ranging from protein and DNA synthesis to immune surveillance.[79] Because life is the art of creating signals that rise above ambient chemical noise, and Km is chemistry, this is one of life's essential tricks. One might say that nothing in biology makes sense except in the light of kinetic proofreading.

• The principle of closest packing in information space.[73] When a molecule, cell, or ecosystem is forced to switch between distinguishable states, it is making a decision that involves a certain number of bits. To execute that decision, it must dissipate a minimum amount of energy that is related to the number of states available and to the channel capacity of Shannon information theory.
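A back-of-envelope version of such a bound, using the Landauer limit of kT ln 2 per bit as a stand-in for the exact expression in reference [73]:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # body temperature, K

def min_dissipation(n_states):
    """Landauer-type lower bound: deciding among n distinguishable
    states involves log2(n) bits, each costing at least kT ln 2."""
    bits = math.log2(n_states)
    return bits * k_B * T * math.log(2)

# A binary switch versus a choice among 30 biomarker-defined states
for n in (2, 30):
    print(f"{n:>3} states -> at least {min_dissipation(n):.2e} J")
```

The absolute numbers are tiny, but the point is structural: the cost grows with the number of available states, tying decision-making to thermodynamics.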

Other principles have also been identified.[80, 81] In physics, laws take the form of conservation – quantities like mass, energy, and momentum that are redistributed but whose total amount remains constant. These in turn stem from symmetries in the process being studied,[82] and symmetries seem entirely plausible for biology.[83]

4.3. The New Math – Biology Needs Enhanced Computational Tools

Mathematical tools that could be modified to incorporate the four eccentricities of biology are considered in Box 4.

Box 4. Which mathematics would apply to biology or eigenisms?

Predictivity in physics ultimately came from mathematics. Mechanism was lost in favor of the mathematics of patterns, as lamented a century ago (ref.[84], the original title of which was The Decline of Mechanism). Replacing collisions and hypothetical cogs by mathematical physics revolutionized the power of physics, but with a faded sense of understanding what is going on.[85] The unreasonable effectiveness of mathematics in the physical sciences[86] appears to lie in the fact that each branch of mathematics captures a type of coordinated behavior; billiard-ball collisions are only one type of coordination and capture only one part of physics.[87] The axioms underlying the appropriate biological mathematics ought to reflect either a) a property of biology, such as a minimization principle[28, 74] or b) an abiological consequence of statistics (ref.[88], esp. pp. 625, 627).

In the absence of laws, computational tools come primarily from an alternative to the interventionist approach that has an even longer tradition: inferring causal relationships in the absence of experimental manipulation.[46, 89-94] This is the usual situation in economics, sociology, and ecology. Termed "path analysis", the approach uses databases of observational data along with systems of model equations to infer the contribution of various paths to the final result. Manipulation of a variable comes from switching between observed values of the variable, and sophisticated quantitative software is available. The prevailing strategy is the directed acyclic graph (DAG),[13, 16, 93, 95, 96] as in Box 1. All of the system's possible micro states are listed in a matrix, in which equivalence classes define the macro states. Transitions between states are represented by DAGs. Causal links are quantified by successively altering each of the possible micro or macro states ("perturbations"), observing the new states, and summing the frequencies of each transition. Biological examples are clearly explained in reference[46]. Capturing data-driven biology this way is limited by our not knowing all possible states of a cell and by the vast space of possible fits to the data.[97] More importantly, these analyses in practice assume linear equations, absence of canceling paths, and acyclic graphs lacking feedback. The case of feedback has occasionally been considered from a mathematical standpoint[13, 91, 98-102] and implementation requires experimental time series data so that the value Z' resulting from ΔA becomes an input to A. dZ/dt can be a variable separate from Z, acting as if from outside the system.[91, 101] These systems can have instabilities that constitute novel states.[101, 103] Yet, investigations have focused on variables related by linear equations.
To mirror biology, additional enhancements are apposite: Constraints need to be overt, rather than being implemented as omitted possible states. Different domains of micro coordination need to be specifiable, creating different macro objects. Quantifying causal strength by perturbing all possible states of a matrix should incorporate the distinction between healthy states and the sick states that lie outside the operating range of an engineered system. With these enhancements, Directed Cyclic Graphs (DCG) may capture biologically realistic constraints and cancellations. This matrix approach may then lead to a Lagrangian analytical description. A potential benefit is that the corresponding "Lagrangian multipliers" not only solve the intractable Newtonian equations arising when many particles are each acted on by many forces, but also reveal the hidden constraint forces.[52] Path analysis designed for observational data can be applied to experimental data.[94, 104] Ideally the biological data reviewed here and in accompanying papers provide convincing reasons to revisit Directed Cyclic Graphs of nonlinear, cross-canceling, hierarchical processes. This effort may reveal a terra nova beyond.
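A toy path analysis on observational data alone, with an invented three-variable linear model (direct path coefficient 0.5; indirect path 0.8 × 0.7 through a mediator), shows the flavor of the approach:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical linear structural model with two paths A -> Z:
#   direct (coefficient 0.5) and via mediator B (0.8 * 0.7 = 0.56)
A = rng.normal(size=n)
B = 0.8 * A + rng.normal(scale=0.5, size=n)
Z = 0.5 * A + 0.7 * B + rng.normal(scale=0.5, size=n)

# Path analysis on purely observational draws: regress each variable
# on its modeled parents and read off the path coefficients.
b_AB = np.linalg.lstsq(np.c_[A], B, rcond=None)[0][0]
coef = np.linalg.lstsq(np.c_[A, B], Z, rcond=None)[0]
direct, indirect = coef[0], b_AB * coef[1]
print(f"direct ~ {direct:.2f}, indirect ~ {indirect:.2f}")
```

Note the limitations the text flags: this recovery works because the model is linear, acyclic, and free of canceling paths – exactly the assumptions that biological feedback and cancellation violate.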

4.4. Control – Replacing Control Knobs with Rudders and Fixed Constraints

Experience gives us the intuition that a control point, such as a volume control, is a small part of the system that changes a higher-level function of the entire system. Hence it is itself intrinsically micro. But our intuition comes largely from engineered systems like cars and radios, which result from engineers working hard to rig the system to have this localized behavior. The Wright brothers steered their airplane by warping the entire wing to alter its lift asymmetrically; Curtiss's contribution was the aileron, a small flap within the wing that achieved the same end. Evolution has localized some controls, such as hypothalamic nuclei, but controlling complicated systems like weather or a cell is likely to require a different approach (see Box 5).

Box 5. Control.

A priori, the branched networks, feedback loops, and multifunctional proteins result in a system that has few localized control knobs, or none. Consequently, the experimentalist may never be able to prove how the wiring diagram of the entire system works. Nevertheless, organisms do change in distinct and repeatable ways during embryogenesis, during cell differentiation in the adult, and in response to environmental stimuli.[42] How can the embryo or the experimenter control specific properties of an object consisting of networks, loops, and hierarchies? Is it the case that a control often must be non-localized – a motive source or constraint distributed over space or time? Do we prefer to control the instigator or determinant? Key properties of controls include:

• A control is by definition a new constraint that does not create an emergent property. (Kaleidoscopes are stunning because their control ring breaks this rule.) Nor does a control break what is already there. It simply changes a single pre-existing property of the system. Though seemingly powerful, a control is therefore a local and weak constraint. The engineering profession specializes in creating discrete modules that can be manipulated by such weak-constraint dials.

• A control knob is by definition irreversible. A volume control stays where it is put, without sliding back to zero. In contrast, a rudder is reversible; when the hand leaves the tiller, the sea straightens the rudder. In biology, metabolic manipulations are more like rudders. Yet differentiation is stable, so developmental biology contains lessons for medicine.

• A control knob or rudder is not itself an instigator of change. It is the opposite – a new constraint held constant. A radio's station frequency selector and even the on-off switch alter the outcome of the electrons moving through the wires.
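The knob-rudder distinction can be sketched as a hypothetical first-order system: while a "rudder" control is held, the state tracks it; once released, the state relaxes back to baseline, so control must be sustained:

```python
def relax(state, target, control, rate=0.2, steps=50):
    """A first-order system: with the control held, the state is
    pulled toward `control`; released, it relaxes back to `target`.
    All parameters are illustrative."""
    for _ in range(steps):
        setpoint = control if control is not None else target
        state += rate * (setpoint - state)
    return state

baseline = 0.0
# Rudder: hold the control at 1.0, then let go of the tiller
held = relax(baseline, target=0.0, control=1.0)
released = relax(held, target=0.0, control=None)
print(f"held: {held:.2f}, released: {released:.2f}")
```

A knob would correspond to a system that latches at 1.0 after the hand is removed; most metabolic manipulations behave like the rudder above.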

For a hierarchical eigenism, the Chladni plate analog is instructive about how to exert control via constraints. The instigator for the sand grains was the vibrating plate, which generated micro level changes amongst the sand grains and without which they all would have remained at rest. The determinants for the sand grain pattern resided in the plate's shape, stiffness, and resonant frequency, which are macro level constraints on the sand grains; they resemble structure, selection pressure, or information rather than power. Notice also that it takes time for the patterns to emerge. It therefore appears that reductionism and parts lists have not bought us what we need to exert control in biology. We instead need to understand the constraints. These behave like Aristotle's final cause, not out of mysterious "intent" but simply because constraints dictate the achievable stable state; events will trend toward that state.

Perhaps Nature has already figured this out. If a particular biological system has no local control point, does it have a constraint that is a de-localized control – a control "sheet"? Does control require sustained modification of multiple initial micro events? Examples come to mind of biological phenomena that are disparaged as "non-specific" but might be better characterized as de-localized constraints:

  • Redox control of signaling and metabolism that alters entire ensembles of proteins, for example by modifying the extremely redox-sensitive active site of phosphatases[105]

  • MicroRNAs, each of which targets hundreds of mRNAs

  • Piwi-interacting RNAs (piRNAs), which bind Piwi proteins and constitute the largest family of small non-coding RNAs

  • Stress-induced alternative mRNA splicing, which affects dozens of genes[106]

  • Electric and gravitational fields, affecting differentiation of sheets of cells[107].

In medicine, a very real implication is that restoring a tissue's state to its normal condition may require oligo-target drugs delivered for sustained lengths of time, long enough for the system's parts to be herded back to the desired state.

5. Conclusions and Outlook

Rethinking causation in complex organisms has led us through three layers of the problem: What did we want to know? What is the form of a possible answer? How will we identify those causes experimentally? Insights emerged that suggest signals to watch for in single-cell or embryo experiments: Emergence of novel properties comes from constraints rather than augmentations; microscopic components correlated by structural constraints produce a macroscopic environment that constrains different micro components. On this hierarchical background, pathway divergence, convergence, and feedback are ubiquitous and produce signal cancellations that, in physics, create quantized behaviors; in biology these interactions may restrict an organism to occupying one of a small set of "eigenism" states. The nonlinear equations describing such interactions can, like physical forces, specify trajectories that guide the organism toward optimal states rather than randomly searching state space. These broad patterns of activity seem closer to what we really wanted to know than the underlying sequential activation of instigators like hormones, neurotransmitters, and protein phosphorylations. Biologists' experimental designs might then shift from point-source instigators of single pathways to behavior of the entire system including rigid constraints; from molecular events to event patterns that define quantized eigenisms; and from single control points to constraints imposed on multiple parts of the organism, sustained over time. This shift implies a new form for both scientific understanding and medical control.

Experimental questions we can ask include: What is the catalog of an organism's eigenisms? Can the Lagrangian operator approach to constraints be generalized to biological interactions? When is a system's control knob distributed rather than local? What are the time constants for switching the organism between states? What simplifications and macroscopic variables let us discover approximate solutions? Answering these questions will require data-intensive experimental tools, amenable organisms, bio-consistent mathematics, and much reflection by biologists.

Acknowledgments

I am grateful for perceptive conversations with Drs. Heinz von Foerster, Karl Kornacker, Philip Perlman, G. Robert Taylor, Edward Behrman, Robert Rosen, John Cairns, Harold Morowitz, and Stephen Stearns, as well as suggestions by anonymous reviewers. Writing was supported by National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS) grant 1R01AR070851 and National Cancer Institute (NCI) grant 2P50CA121974.

Footnotes

Conflict of Interest

The author declares no conflict of interest.

References

  • [1].Gibbs JW, The Scientific Papers of J. Willard Gibbs, Ox Bow Press, Woodbridge, CT: 1993. [Google Scholar]
  • [2].Reif F, Fundamentals of Statistical and Thermal Physics, Levant Books, Kolkata, India: 2010. [Google Scholar]
  • [3].Mach E, The Science of Mechanics, Open Court Publishing, LaSalle, IL: 1942. [Google Scholar]
  • [4].Feynman RP, QED: The Strange Theory of Light and Matter, Princeton University Press, Princeton, NJ: 1985. [Google Scholar]
  • [5].Mott NF, Proc. Royal Soc. A 1929, 126, 79. [Google Scholar]
  • [6].Einstein A, Podolsky B, Rosen N, Phys. Rev 1935, 47, 777. [Google Scholar]
  • [7].Anderson PW, Science 1972, 177, 393. [DOI] [PubMed] [Google Scholar]
  • [8].Laughlin RB, Pines D, Proc. Natl. Acad. Sci. USA 2000, 97, 28. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [9].Woodward JF, Making things happen: a theory of causal explanation, Oxford University Press, New York: 2003. [Google Scholar]
  • [10].Ross LN, Woodward JF, Stud Hist Philos Biol Biomed Sci 2016, 59, 35. [DOI] [PubMed] [Google Scholar]
  • [11].Lewis DK, Counterfactuals, Harvard University Press, Cambridge, MA: 1973. [Google Scholar]
  • [12].Mackie JL, Am. Philosoph. Quarterly 1965, 2, 245. [Google Scholar]
  • [13].Spirtes P, Glymour C, Scheines R, Causation, Prediction, and Search, MIT Press, Cambridge, MA: 2000. [Google Scholar]
  • [14].Lanczos C, The Variational Principles of Mechanics, Dover, New York: 1970. [Google Scholar]
  • [15].Goldstein H, Poole C, Safko J, Classical Mechanics, Addison Wesley, San Francisco: 2002. [Google Scholar]
  • [16].Albantakis L, Marshall W, Hoel E, Tononi G, arXiv:1708.06716 2017. [Google Scholar]
  • [17].DIsmukes K, Nature 1977, 269, 557. [Google Scholar]
  • [18].Wacker M, Holick MF, Dermato-Endorinol. 2013, 5, 51. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Bikle DD, Chem. Biol 2014, 21, 319. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [20].Piatigorsky J, Wistow G, Science 1991, 252, 1078. [DOI] [PubMed] [Google Scholar]
  • [21].Premi S, Wallisch S, Mano CM, Weiner AB, Bacchiocchi A, Wakamatsu K, Bechara EJ, Halaban R, Douki T, Brash DE, Science 2015, 347, 842. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [22].Krauthammer M, Kong Y, Ha BH, Evans P, Bacchiocchi A, McCusker JP, Cheng E, Davis MJ, Goh G, Choi M, Ariyan S, Narayan D, Dutton-Regester K, Capatana A, Holman EC, Bosenberg M, Sznol M, Kluger HM, Brash DE, Stern DF, Materin MA, Lo RS, Mane S, Ma S, Kidd KK, Hayward NK, Lifton RP, Schlessinger J, Boggon TJ, Halaban R, Nature genetics 2012, 44, 1006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [23].Rosenblatt F, Psychol. Rev 1958, 65, 386. [DOI] [PubMed] [Google Scholar]
  • [24].Minsky M, Papert S, Perceptrons: An Introduction to Computational Geometry, MIT Press, Cambridge, MA: 2017. [Google Scholar]
  • [25].Werbos PJ, Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences, Harvard University Press, Cambridge, MA: 1975. [Google Scholar]
  • [26].Cannon WB, The Wisdom of the Body, W. W. Norton, New York: 1939. [Google Scholar]
  • [27].Ashby WR, Design for A Brain, Chapman & Hall, London: ??? 1960. [Google Scholar]
  • [28].Maturana HR, Varela FJ, Autopoiesis and Cognition: The Realization of the Living, D. Reidel Publishing Co., Boston: 1980. [Google Scholar]
  • [29].Friston K, Ao P, Comp. Math. Meth. Med 2012, 2012, Article ID 937860. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [30].Bogacz R, J. Math. Psychol 2017, 76, 198. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [31].Knezevic D, Brash DE, Cell Cycle 2004. 3, 729. [PubMed] [Google Scholar]
  • [32].Wikonkal NM, Remenyik E, Knezevic D, Zhang W, Liu M, Zhou H, Berton TR, Johnson DG, Brash DE, Nature Cell Biol. 2003, 5, 655. [DOI] [PubMed] [Google Scholar]
  • [33].Knezevic D, Zhang W, Rochette P, Brash DE, Proc. Natl. Acad. Sci. USA 2007, 104, 11286. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [34].Lowe SW, Schmitt EM, Smith SW, Osborne BA, Jacks T, Nature 1993, 362, 847. [DOI] [PubMed] [Google Scholar]
  • [35].Wloga EH, Criniti V, Yamasaki L, Bronson RT, Nat. Cell Biol 2004, 6, 565. [DOI] [PubMed] [Google Scholar]
  • [36].Chaturvedi V, Sitailo LA, Qin JZ, Bodner B, Denning MF, Curry J, Zhang W, Brash D, Nickoloff BJ, Oncogene 2005, 24, 5299. [DOI] [PubMed] [Google Scholar]
  • [37].Hopfield JJ, Proc Natl Acad Sci U S A 1982, 79, 2554. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Rumelhart DE, McClelland J, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press, Cambridge, MA: 1986. [DOI] [PubMed] [Google Scholar]
  • [39].Kauffman S, At Home in the Universe: the search for the laws of self-organization and complexity, Oxford University Press, New York: 1995. [Google Scholar]
  • [40].Brash DE, Goncalves LCP, Bechara EJH, G. Excited-State Medicine Working, Trends Mol Med 2018, 24, 527. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [41].Waddington CH, Nature 1942, 150, 563. [Google Scholar]
  • [42].Schmalhausen II, Factors of evolution : the theory of stabilizing selection, University of Chicago Press, Chicago: 1986. [Google Scholar]
  • [43].Brash DE, eLife 2019, 8, e45809. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [44].Perrin J, Atoms, Ox Bow Press, Woodbridge, CT: 1990. [Google Scholar]
  • [45].Heisler IL, Damuth J, Am. Natural 1987, 130, 582. [Google Scholar]
  • [46].Shipley B, Cause and Correlation in Biology, Cambridge Univ. Press, Cambridge: 2000. [Google Scholar]
  • [47].Shipley B, Ecology 2009, 90, 363. [DOI] [PubMed] [Google Scholar]
  • [48].Pattee H, in Hierarchical Theory: The Challenge of Complex Systems, (Ed: Patee H), George Braziller, New York: 1973, 71. [Google Scholar]
  • [49] Montévil M, Pocheville A, Organisms. J. Biolog. Sci. 2017, 1, 37.
  • [50] Jacobi CGJ, Jacobi's Lectures on Dynamics, Hindustan Book Agency, New Delhi, India: 2009.
  • [51] Lindsay RB, Margenau H, Foundations of Physics, Ox Bow Press, Woodbridge, CT: 1981.
  • [52] Butterfield J, arXiv:physics/0409030, 2004.
  • [53] Mossio M, Montévil M, Longo G, Prog. Biophys. Mol. Biol. 2016, 122, 24.
  • [54] Haken H, Wunderlin A, Yigitbasi S, Open Syst. Inform. Dynam. 1995, 3, 97.
  • [55] Sober E, Philosoph. Studies 1983, 43, 201.
  • [56] Campbell DT, in Studies in the Philosophy of Biology, (Eds: Ayala FJ, Dobzhansky T), University of California Press, Berkeley, CA: 1974, 179.
  • [57] Ellis GFR, in The Re-Emergence of Emergence, (Eds: Clayton P, Davies PCW), Oxford University Press, Oxford, UK: 2006, 79.
  • [58] Pezzulo G, Levin M, J. R. Soc. Interface 2016, 13.
  • [59] Harnad S, Physica D 1990, 42, 335.
  • [60] Ellis GFR, Interface Focus 2012, 2, 126.
  • [61] Odell GM, Oster G, Alberch P, Burnside B, Devel. Biol. 1981, 85, 446.
  • [62] Beckner M, in Studies in the Philosophy of Biology, (Eds: Ayala FJ, Dobzhansky T), University of California Press, Berkeley, CA: 1974, 163.
  • [63] Auletta G, Ellis GFR, Jaeger L, J. R. Soc. Interface 2008, 5, 1159.
  • [64] Goodwin BC, Temporal Organization in Cells: A Dynamic Theory of Cellular Control Processes, Academic Press, New York: 1963.
  • [65] Rosen R, Dynamical System Theory in Biology: Stability Theory and Its Applications, Wiley-Interscience, New York: 1970.
  • [66] Rosen R, in Biological Mechanisms in Aging: Conference Proceedings, (Ed: Schimke RT), National Institute on Aging, Bethesda, Maryland: 1981, 107.
  • [67] Jaeger J, Monk N, J. Physiol. 2014, 592, 2267.
  • [68] Kaplan D, Glass L, Understanding Nonlinear Dynamics, Springer-Verlag, New York: 1995.
  • [69] Tsuchiya M, Giuliani A, Hashimoto M, Erenpreisa J, Yoshikawa K, PLoS One 2016, 11, e0167912.
  • [70] Gorelik E, Landsittel DP, Marrangoni AM, Modugno F, Velikokhatnaya L, Winans MT, Bigbee WL, Herberman RB, Lokshin AE, Cancer Epidemiol. Biomarkers Prev. 2005, 14, 981.
  • [71] Ross JS, Hatzis C, Symmans WF, Pusztai L, Hortobágyi GN, Oncologist 2008, 13, 477.
  • [72] Bigbee WL, Gopalakrishnan V, Weissfeld JL, Wilson DO, Dacic S, Lokshin AE, Siegfried JM, J. Thorac. Oncol. 2012, 7, 698.
  • [73] Schneider TD, Nucleic Acids Res. 2010, 38, 5995.
  • [74] Friston K, Nat. Rev. Neurosci. 2010, 11, 127.
  • [75] Adam Y, Kim JJ, Lou S, Zhao Y, Xie ME, Brinks D, Wu H, Mostajo-Radji MA, Kheifets S, Parot V, Chettih S, Williams KJ, Gmeiner B, Farhi SL, Madisen L, Buchanan EK, Kinsella I, Zhou D, Paninski L, Harvey CD, Zeng H, Arlotta P, Campbell RE, Cohen AE, Nature 2019, 569, 413.
  • [76] Briggs JA, Weinreb C, Wagner DE, Megason S, Peshkin L, Kirschner MW, Klein AM, Science 2018, 360, eaar5780.
  • [77] Hopfield JJ, Proc. Natl. Acad. Sci. USA 1974, 71, 4135.
  • [78] Ninio J, Biochimie 1975, 57, 587.
  • [79] Yarus M, Trends Biochem. Sci. 1992, 17, 130.
  • [80] Alon U, An Introduction to Systems Biology: Design Principles of Biological Circuits, Chapman & Hall/CRC, Boca Raton, FL: 2007.
  • [81] Bialek WS, Biophysics: Searching for Principles, Princeton University Press, Princeton, NJ: 2012.
  • [82] Noether E, Nachr. D. König. Gesellsch. D. Wiss. zu Göttingen, Math-phys. Klasse 1918, 1918, 235.
  • [83] Thompson DAW, On Growth and Form, Cambridge University Press, Cambridge, UK: 1952.
  • [84] d'Abro A, The Rise of the New Physics, Dover, New York: 1951.
  • [85] Mahon B, The Man Who Changed Everything: The Life of James Clerk Maxwell, Wiley, Hoboken, NJ: 2003.
  • [86] Wigner EP, Communic. Pure Appl. Math. 1960, 13, 1.
  • [87] Born M, in Physics in My Generation, (Ed: Born M), Springer-Verlag, New York: 1969, 132.
  • [88] Jaynes ET, Phys. Rev. 1957, 106, 620.
  • [89] Wright S, J. Agricult. Res. 1921, 20, 557.
  • [90] Spirtes P, Glymour C, Scheines R, Causation, Prediction, and Search, Springer, 1993.
  • [91] Iwasaki Y, Simon HA, Artif. Intell. 1994, 67, 143.
  • [92] Dash D, Druzdzel MJ, in Proc. Fifteenth Ann. Conf. Uncertainty in Artif. Intell. (UAI-99), Morgan Kaufmann, San Francisco: 1999, 142.
  • [93] Pearl J, Causality: Models, Reasoning, and Inference, Cambridge Univ. Press, Cambridge, UK: 2000.
  • [94] Glymour C, Zhang K, Spirtes P, Front. Genet. 2019, 10, 524.
  • [95] Ay N, Discrete App. Math. 2009, 157, 2439.
  • [96] Hoel EP, Albantakis L, Tononi G, Proc. Natl. Acad. Sci. USA 2013, 110, 19790.
  • [97] Glymour C, Danks D, Glymour B, Eberhardt F, Ramsey J, Scheines R, Spirtes P, Teng CM, Zhang J, Synthese 2010, 175, 169.
  • [98] Spirtes P, in Proc. 11th Conf. Uncertainty Artif. Intell., (Eds: Besnard P, Hanks S), Morgan Kaufmann, San Mateo: 1995, 491.
  • [99] Koster JTA, Annals Statist. 1996, 24, 2148.
  • [100] Glymour C, The Mind's Arrows: Bayes Nets and Graphical Causal Models in Psychology, MIT Press, Cambridge, MA: 2001.
  • [101] Dash D, Druzdzel M, in ECSQARU 2001, Springer-Verlag, Berlin: 2001, 192.
  • [102] Ramsey JD, Hanson SJ, Hanson C, Halchenko YO, Poldrack RA, Glymour C, Neuroimage 2010, 49, 1545.
  • [103] Huttegger S, Skyrms B, Tarres P, Wagner E, Proc. Natl. Acad. Sci. USA 2014, 111 Suppl. 3, 10873.
  • [104] Gough L, Grace JB, Ecology 1999, 80, 882.
  • [105] Jones DP, Sies H, Antiox. Redox Signaling 2015, 23, 734.
  • [106] Williamson L, Saponaro M, Boeing S, East P, Mitter R, Kantidakis T, Kelly GP, Lobley A, Walker J, Spencer-Dene B, Howell M, Stewart A, Svejstrup JQ, Cell 2017, 168, 843.
  • [107] Bizzarri M, Masiello MG, Giuliani A, Cucina A, Bioessays 2018, 40.
  • [108] Glass L, Mackey MC, From Clocks to Chaos: The Rhythms of Life, Princeton University Press, Princeton, NJ: 1988.
  • [109] Improta R, Santoro F, Blancafort L, Chem. Rev. 2016, 116, 3540.
