Biophysical Journal. 2021 Jun 30;120(19):4252–4263. doi: 10.1016/j.bpj.2021.06.028

The second law: information theory and self-assembly

Govind Menon
PMCID: PMC8516636  PMID: 34214530

Abstract

An interplay between algorithmic and mechanistic viewpoints is necessary to quantify the rates of transition between different states in developmental pathways. An ab initio method that uses only information theory and geometry to model the conformational entropy of linkages is discussed. This approach also reveals some ties between biological questions and the foundations of mathematics.

Significance

This work makes three main contributions. First, an algorithmic framework for morphogenesis is illustrated with a geometric model for self-folding polyhedra. The framework is not limited to self-assembly. It may be adapted to other instances of morphogenesis, such as the partition of mitotic domains, which may be described by geometric decompositions. Second, the application of a stochastic relaxation algorithm to compute typical conformations of linkages (such as models for long-chain molecules) is described. Finally, the work discusses the use of information theory as a mathematical foundation for morphogenesis through an examination of the distinction between geometric properties determined by visual intuition and those determined by a process of measurement.

Introduction

The purpose of this work is to illustrate the utility of an algorithmic view of morphogenesis. The heart of the matter is this: we model developmental pathways as a series of random jumps between different states of being. This viewpoint is demonstrated with geometric models where the states of being are easily visualized and a principled theory may be used to answer a critical quantitative question—how long does a system stay in a particular state?

It is helpful at the outset to place this formalism within a broader context. The idea that life is based on an interplay between algorithms and randomness—a genetic code in constant evolution through interaction with its environment—is a profoundly seductive paradigm for a mathematical scientist. It touches on at least three major themes in 20th century mathematics: the birth of computer science, the formalization of probability theory, and the search for unifying structure in geometry. And as Conway’s game of life reveals (1), it is also great fun!

But there is an epistemological tension between the unifying power of abstraction, which underlies all forms of mathematical reasoning, and the detailed analysis of the particular that is central to biology. Although cellular automata reduce life to an interplay between abstract principles such as replication and competition, it is hard to escape the feeling that such abstraction does not provide “real answers” for a pragmatic biologist. Surely the algorithmic discovery of a new “life form” in the game of life cannot mean very much when contrasted with the diversity of the tree of life filtered through 3.5 billion years of evolution.

Or does it? The contrast between artificial and natural life runs through all forms of genetic modification, and it seems prudent to hedge one’s bets, distinguishing between fundamental limits, i.e., problems that can be resolved in principle given enough computational power, and those that can be resolved pragmatically, i.e., studied in wet labs with modern experimental techniques. Mathematical models that abstract the essence of biological truth, such as replication and competition in the game of life, play a critical role in bridging this divide. The goal of this work is to illustrate some lessons learnt from one such model system: self-folding polyhedra. This is a model experimental system on the mesoscale (nanometer to millimeter) that sheds light on the self-assembly of chemical and biological systems with similar geometry. It is suited to interdisciplinary investigation, as it allows the use of “off-the-shelf” mathematical techniques to resolve practical challenges in the lab, as well as a sharper look at foundational questions in mathematics—the nature of space and what exactly it means to compute.

The study is structured into two parts to demonstrate both of these aspects. In the longer technical part, we review a discrete geometric paradigm for self-assembly and explain how to use a formulation of graph embeddings developed by the author to compute rates of transition in self-assembly. This part of the study illustrates the use of algorithmic principles and minimal geometric reasoning to provide a unifying mathematical perspective on self-assembly. The second part of the study, under the subheading Does life compute? in the Results and discussion, is a reflection on the interplay between mathematics and biology that arises when the above questions are situated within broader developments in mathematics and the sciences. This part of the study places the computational scheme within its true mathematical context: the role of information theory in geometry and in the Bayesian conception of models.

The underlying viewpoint is applicable to problems in morphogenesis, such as embryology, which may be modeled as jumps between states. The extension of the methodology of this work to such systems requires the isolation of a minimal set of rules, typically based on geometric decompositions, followed by the analysis of developmental pathways determined by these rules. These rules are strongly dependent on the organism and developmental process being studied. We hope that the visual simplicity of folding—the geometric metaphor chosen in this work—will provide sufficient hints for the discovery of similar rules in other contexts. The methodology itself is not restricted to folding. The role of visual imagination and a decomposition of cell development into well-defined stages run through Foe’s seminal work on the partitions of mitotic domains (2). As discussed in Does life compute?, the nature of geometric decomposition and morphogenesis can be interpreted more broadly with information theory.

Materials and methods

Discrete geometry and self-assembly

Let us first briefly describe the experimental work and mathematical framework that stimulated the ideas described in this work.

Self-assembly of polyhedra

An important theme in nanotechnology is the use of biological metaphors to design small scale devices. Examples that stimulated our work are Seeman’s use of DNA as a construction material on the nanoscale (3) and the understanding of ATP synthases as molecular engines (4, 5, 6). Such an interplay between biology and technology is primarily driven by experiments. We will focus on the role of mathematics in a model problem: the self-assembly of polyhedra. This class of problems is narrow enough to admit precise mathematical structure. It is also broad enough to encompass the study of simple organisms such as many single-stranded RNA viruses (7), the chemistry of C60 (8) and supramolecular “buckyballs” (9), DNA containers (10,11), and self-folding polyhedra (12).

In its most abstract setting, folding is a technique for constructing three-dimensional objects from a two-dimensional sheet. This formalism is rich enough to encapsulate the art form of origami, so it is clear that there is a great deal that one can construct. But, as anyone who has tried their hand at origami knows, there is also a great deal that can go wrong. What distinguishes origami from a crumpled sheet of paper is an algorithm: a precise description of a sequence of folds that converts a sheet of paper into a desired shape.

Self-folding is the technology (and art form) of constructing a three-dimensional shape on micro- and nanoscales without the explicit use of an algorithm. Instead, one must use physics to guide a two-dimensional shape into its final three-dimensional form. The experiments in the Gracias lab rely on surface tension and thermal fluctuations (13). A schematic of this method is as follows. Suppose our goal is to construct a simple shape such as a cube. Imagine cutting the cube along edges, flattening it into a shape that consists of a collection of rigid squares joined at the uncut edges that remain. Such a two-dimensional shape is called a net (see Fig. 1). More generally, all convex polyhedra may be unfolded into nets.

Figure 1. Unfolding the cube. The folding algorithm is introduced in (12). At each step, all edges on a face are unglued until it is free to rotate by the dihedral angle (90° in this example, but see Fig. 4). This image is from (12). To see this figure in color, go online.

The main idea in surface-tension-driven self-folding is to begin with such a net, patterned by electrolithography, with a low-melting-point material such as solder deposited on the edges between faces. The solder melts when the net is released into a high-boiling solvent, and its surface tension causes the faces to rotate relative to one another (see (14) for details of the experiments). There is no explicit control on the sequence of folds, and in the first approximation, one must assume that all the faces begin to rotate simultaneously. In practice, therefore, some of the nets fold successfully into the desired shapes, whereas many do not. An empirical test of the role of the initial net in self-folding was first carried out in (13). All possible nets of the cube were patterned on a substrate and self-folded. They were then examined for defects under an optical microscope and graded into four categories, and the yield of each grade as a fraction of initial nets was evaluated (13, §3).

The mathematical question that stimulated our interest is this: how does one design initial nets for self-folding with maximal yield? The catch is a combinatorial explosion: whereas a cube has 11 nets, a dodecahedron has 43,380. Matters get worse very fast for even the best understood polyhedra (see (15), Table 1). Thus, in practice, successful self-folding requires a careful resolution—experimental and theoretical—of an inverse problem that involves optimizing a yield criterion over a massive data set of initial nets.
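The combinatorics behind these counts can be made explicit: edge unfoldings of a convex polyhedron correspond to spanning trees of its face-adjacency graph, a standard fact in the folding literature (see, e.g., (25)), so the number of labeled unfoldings follows from the matrix-tree theorem, and the nets are what remain after identifying unfoldings related by symmetry. The short sketch below (a minimal illustration in Python, not part of the original work) counts the 384 labeled unfoldings of the cube, of which 11 are distinct as nets.

    import numpy as np

    # Face-adjacency graph of the cube: 6 faces, each adjacent to the 4 faces
    # that are not its opposite. Opposite pairs: (0,1), (2,3), (4,5).
    n = 6
    opposite = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and opposite[i] != j:
                A[i, j] = 1.0

    # Matrix-tree theorem: the number of spanning trees equals any cofactor
    # of the graph Laplacian L = D - A.
    L = np.diag(A.sum(axis=1)) - A
    print(round(np.linalg.det(L[1:, 1:])))  # 384 labeled unfoldings; 11 distinct nets up to symmetry

The same computation applied to the face-adjacency graph of the dodecahedron produces several million labeled unfoldings, which collapse to the 43,380 nets quoted above once symmetry is taken into account.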

Configuration spaces for self-assembly

The process of folding and unfolding a polyhedron admits a natural theoretical framework. We model self-assembly beginning with a net as a sequence of jumps between partially formed intermediate states, where the states differ in which of their edges have been glued. The assembly of polyhedra under these rules is shown in Figs. 1, 2, 3, and 4. A distinct assembly process is shown in Fig. 5.

Figure 2. Combinatorial configuration space for the cube. The assembly pathways for a self-folding cube beginning at all 11 nets are computed using the unfolding algorithm of Fig. 1. A perspective view is adopted to visualize intermediate states. Observe that each intermediate may typically adopt infinitely many different conformations because the faces are free to rotate about edges unless kinematic constraints forbid it (e.g., when three squares are glued at a corner). Each edge on this graph is weighted by a symmetry factor that accounts for the number of ways in which faces may be glued or unglued to transform one intermediate to another. This image is from (12). To see this figure in color, go online.

Figure 3. Labeled assembly pathways from two nets. These figures demonstrate the complexity of pathways originating from two distinct nets. The faces are labeled with letters and the edges with numbers to keep track of information that is not included in Fig. 2. To see this figure in color, go online.

Figure 4. Combinatorial configuration spaces for the octahedron. The assembly pathways for an octahedron beginning with all 11 nets are shown. The nets may misfold into an isomeric “boat” configuration when two edges meet at the wrong dihedral angle. Both sets of pathways are illustrated in this figure, along with images of some intermediate states. State 83 is the octahedron; state 84 is its isomer. Red paths have the correct dihedral angle; black paths are misfolded. To see this figure in color, go online.

Figure 5. The building game. A polyhedron is built by the attachment of a single face to a contiguous group of faces. This model has been proposed for fullerenes and viral capsids (16,17). Intermediate states in the combinatorial configuration spaces for the cube and octahedron are represented in the figure above by Schlegel diagrams. The lightly shaded faces are empty sites; the darkly shaded faces are occupied. The intermediate consists of a contiguous collection of darkly shaded polygons that is built by attaching one face at a time. The full combinatorial configuration space for the octahedron includes other states along with symmetry factors that count their multiplicity (cf. Fig. 1). To see this figure in color, go online.

This framework separates the combinatorial and geometric aspects of self-assembly. The distinction is as follows. The term “combinatorial” reflects the information contained in the gluing rules. In chemical terms, it reflects whether a particular bond has been formed or not. The adjective “geometric” reflects the fact that an intermediate state with given combinatorial information may be realized in three-dimensional space, ℝ³, in many different ways. In chemical terms, it reflects that the same molecule has infinitely many conformations. Each such conformation is an “isometric embedding,” for which the term isometric means that the length of any given edge is the same in every conformation.

Our work (12,18) focused on the combinatorial aspects of self-assembly. An algorithm was introduced to unfold a given polyhedron into nets through a sequence of elementary moves. This algorithm may be used to exhaustively enumerate all intermediate states for smaller polyhedra, as well as to sample intermediate states for larger polyhedra. This model was tested in two ways.

  • 1)

    Heuristics obtained in the experiments with cubes were combined with the algorithm to obtain candidate nets for the dodecahedron and truncated octahedron. These nets were then tested in the lab. The truncated octahedron has more than two million nets, yet it may be self-assembled with high yield by such algorithmic design (12).

  • 2)

    Folding pathways computed with the algorithm were compared with folding pathways in the lab, observed by optical microscopy ((18), Figs. 13, 14, and 15). Such information is not experimentally accessible for self-assembling polyhedra on smaller scales (e.g., fullerenes and viruses).

Exhaustive enumeration is not a feasible strategy for larger polyhedra. However, the same framework may be used to sample large subsets of all intermediates (see (18), Fig. 16).

This work illustrates the utility of mathematical modeling to resolve certain practical challenges in self-folding. However, for a mathematician, the main attraction of such graph-based models is that they provide a unified description for self-assembly. This goes roughly as follows:

  • 1)

    Discrete geometric idealizations have been proposed for several fundamental self-assembly processes. These include lattice-based models for the assembly of long-chain molecules (19) and models for the self-assembly of polyhedra that differ in the underlying rules for attachment and detachment (16,17,20,21) (cf. Figs. 1 and 5).

  • 2)

    Intuitive notions of intermediate states and pathways of assembly can be formalized in a unified way using a graph we term the combinatorial configuration space C = (V, E). Each vertex v ∈ V denotes an intermediate state. Two states u and v are neighbors linked by an edge e if they differ in the formation of a bond (Figs. 2 and 4).

  • 3)

    The kinetics of self-assembly is modeled as a Markov chain on C. The use of Markov chains on graphs is a standard technique well suited to fast computation. To apply this framework to C, we must enrich C to include two positive weights on each edge e ∈ E. These correspond to the forward and backward reaction rates along e (a minimal simulation sketch follows this list).
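To fix ideas, here is a minimal sketch (in Python) of such a weighted graph and of a continuous-time Markov chain simulated on it by the standard Gillespie procedure. The state labels and numerical rates are hypothetical placeholders; in the actual framework the states are partially glued intermediates and the rates are precisely the quantities that must be supplied by a model of conformational diffusion.

    import random

    # Hypothetical toy configuration space: three states and forward/backward
    # rates on each edge (placeholder values, not taken from the paper).
    rates = {
        ("net", "half-folded"): 2.0, ("half-folded", "net"): 0.5,
        ("half-folded", "cube"): 1.0, ("cube", "half-folded"): 0.1,
    }

    def gillespie_step(state):
        # One jump of the continuous-time Markov chain: exponential waiting time
        # with the total exit rate, then a jump chosen proportionally to the rates.
        moves = [(v, k) for (u, v), k in rates.items() if u == state]
        total = sum(k for _, k in moves)
        dt = random.expovariate(total)
        r, acc = random.uniform(0.0, total), 0.0
        for v, k in moves:
            acc += k
            if r <= acc:
                return v, dt
        return moves[-1][0], dt

    random.seed(0)
    state, t = "net", 0.0
    while state != "cube":
        state, dt = gillespie_step(state)
        t += dt
    print("first folding time:", t)

Averaging over many such runs gives statistics such as the mean first passage time to the fully folded state.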

The main bottleneck in upscaling this framework is that it is not feasible to obtain these rates of transition from experimental data. The problem again is a combinatorial explosion. The number of edges in C is typically orders of magnitude larger than the number of vertices (15, Table 1). This prevents the extraction of rates of transition based on experimental observations within accepted statistical paradigms for parameter estimation. It is therefore necessary to compute rates ab initio using a theoretical method. But what theory should we use?

Conformational diffusion of linkages

Motivation for the model

The appeal of discrete geometric models for self-assembly is their conceptual minimalism. The phase space C provides a simple description of assembly pathways. Further, the underlying framework can be adapted to other problems in morphogenesis that may be modeled as a set of jumps between different states on a graph. Thus, we must determine rates of transition with a similar minimalism. The question then is as follows: can one enrich C to include rates using geometric reasoning alone?

Our claim is that a good model for the conformational diffusion of linkages is all that is required to resolve this problem. Here, we use the term diffusion in the mathematical sense (a diffusion is a solution to a stochastic differential equation). A reader uncomfortable with this terminology may loosely equate the phrase “conformational diffusion” with “internal vibrations.” We use the former phrase because we work with linkages that are composed of rigid subunits freely rotating about hinges. Thus, there is no underlying mass or stiffness matrix and no vibrational frequencies in the conventional sense.

An ab initio geometric model for determining the rates of transition for the building game was explored in (15). The intermediate states in the building game consist of partially formed shells. The determination of rates of transition between two partially formed shells may be related to a physical caricature for the formation of fullerenes as follows. We assume we begin with a gas of monomers corresponding to the individual faces of the fullerene. The monomers diffuse in space, sticking with some probability when they collide. Thus, monomers form dimers, the dimers form trimers, and so on, as shown in Fig. 5. Each k-mer is a polyhedral linkage that consists of a collection of rigid faces joined at edges. The linkage adopts different conformations because the faces are free to rotate about hinges. The conformational diffusion of k-mers is the primary factor in determining the rates of transition. This is because the probability of attachment of rigid faces to an intermediate state is higher when the intermediate state adopts a conformation that allows a rigid face to dock at the correct angle. This idea is illustrated with a simpler linkage in Fig. 6.

Figure 6. Conformational diffusion and attachment kinetics. This figure illustrates the formation of a four-bar linkage by random attachment of rigid rods. The rods are assumed to rotate freely about the hinges at their ends. The resulting conformational diffusion determines the rates of self-assembly. Here, we illustrate the last step, in which a single bar must attach to a three-bar linkage. The image on the left is a favorable conformation; the image on the right is unfavorable. The modeling task is to associate probabilities to each of these conformations, using minimal additional assumptions. To see this figure in color, go online.
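A toy version of this modeling task (a sketch for illustration, not the method developed below or in (23)) is to assume, naively, that the hinge angles of the three-bar linkage are independent and uniform, and to estimate by Monte Carlo the probability that the two free ends are separated by approximately one bar length, so that the fourth bar can dock and close the loop. The tolerance is an arbitrary, hypothetical choice; the point of the stochastic relaxation scheme described later is precisely to replace the naive uniform assumption with a principled distribution on conformations.

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, tol = 100_000, 0.05  # the docking tolerance is a hypothetical choice

    # Three unit bars hinged end to end in the plane. The overall rotation does not
    # affect the end-to-end distance, so all three bar directions are sampled uniformly.
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, 3))
    steps = np.stack([np.cos(theta), np.sin(theta)], axis=-1)  # unit vectors along each bar
    gap = np.linalg.norm(steps.sum(axis=1), axis=-1)           # distance between the two free ends

    # Favorable conformations: the gap is close to one bar length, so a single
    # rigid bar can attach and close the four-bar loop (cf. the left panel of Fig. 6).
    print("fraction of dockable conformations:", np.mean(np.abs(gap - 1.0) < tol))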

Conceptually, this idea has its roots in the interplay between geometry and function in enzymes. The idea that binding is determined by conformational geometry is motivated by Boyer’s flip-flop theory for the ATP synthases (22). For the F0F1, nucleotides attach and detach at catalytic binding sites in synchrony with the conformational changes of the molecule so that it functions as a Wankel rotary engine on the molecular scale (4). Experimental resolution of the SERCA pump reveals a similar, but more intricate, interplay between catalysis and conformational changes (6). Our model does not include any biochemistry; however, focusing on the role of conformational diffusion allows us to retain the underlying geometric insight.

The role of conformational diffusion in other models of self-assembly is just as fundamental. The main insight from (12,18) is that partial rigidity of intermediates is the limiting factor in successful self-folding. An intermediate state v ∈ C is a partially formed polyhedron that has some floppy parts and some rigid parts (e.g., when three squares are glued at a corner). It is natural to determine rates by formalizing the idea that the faces of the partially formed intermediate rotate freely about hinges, with edges sticking when they are sufficiently close.

These caricatures may be extended to the HP (hydrophobic-hydrophilic) model of protein folding and models for the formation of viral capsids in a similar way. In all these models, we have a state space C as well as a system of algebraic equations that imposes geometric constraints. If we can design a fast subroutine that computes the conformational diffusion of a state v ∈ C, it allows us to compute the rates of transition between neighbors on the graph C. That the same structure describes self-assembly on completely different scales and with different physics is no longer the primary concern; what matters is that all these models may be studied with standardized tools in a systematic manner.

Stochastic relaxation for hard constraint systems

The purpose of this section is to illustrate the applicability of the author’s recent work on the isometric embedding problem to conformational diffusion (23). This goes as follows: 1) each solution to the embedding problem for an intermediate is a candidate spatial conformation during self-assembly; 2) the stochastic flows in (23) provide an additional probability distribution on conformations, thus providing a means to compute the odds of faces being favorably aligned for edge gluing. Implementations of these ideas for the complete configuration spaces in Figs. 2, 3, and 4 lie beyond the scope of this work, as the numerical methods of (23) must first be optimized on benchmark problems in graph embedding before being compared with experimental data on self-folding. However, the simplicity of the basic scheme is demonstrated in Eq. 3 below.

A linkage is an assembly of rigid units such as rods and faces attached at vertices and edges. In mathematical terms, it is a graph G = (V, E) along with a positive function ρ(e) that gives the length of each edge. The set of conformations of the linkage in an ambient space, typically ℝ^q, is the set of functions u: V → ℝ^q such that

|u(e⁺) − u(e⁻)| = ρ(e),   e ∈ E,   (1)

where e± denote the vertices on either end of the edge e and |v| is the length of a vector v ∈ ℝ^q. For example, when we consider the three-bar linkage in Fig. 6, G has four vertices, three edges of unit length, and q = 2. The rigid body mode may be removed by fixing the position of one edge.

The set of all conformations of a linkage defined by (G, ρ), denoted M, is the set of solutions to Eq. 1. This formulation tells us that the study of linkages is nothing but the study of Eq. 1. However, as noted in Fig. 6, it is not enough to know what the solution set M looks like. Not all conformations are conducive to assembly, and what we really want is a natural probability distribution on M that tells us which conformations are likelier than others.

This problem is knottier than one expects at first sight. To begin with, all manifolds may be precisely approximated by the solution sets of Eq. 1 (24); informally, every surface, no matter how tortuous, can be built out of linkages! More generally, Eq. 1 is an important example of a hard constraint system, and there are few general techniques for the study of such systems. First, the nature of the linkage has (unsurprisingly) a sharp effect on M, and much of the algebraic and computational theory of linkages is devoted to formulating precise notions of rigidity and flexibility (25). Second, although the intuitive notion of a random walk on a surface or hypersurface is not hard to visualize (take a random step in space, then project it onto the surface), this does not work very well for Eq. 1 because the solution set M typically has singular points. This makes it difficult to design efficient numerical schemes that combine methods developed for algebraic equations such as Eq. 1 with Monte Carlo schemes (see (26) for an interesting recent contribution in this direction).

A natural fix is to modify Eq. 1, replacing hard constraints with some form of soft constraint. The most immediate choice is to modify the underlying model from a linkage composed of rigid units to a spring-mass system. This is not satisfactory for the following reasons:

  • 1)

    When we introduce ad hoc physics, we lose the scale independence that geometric models of self-assembly provide. That is, to draw general quantitative lessons that apply (for example) to both self-folding polyhedra on the millimeter scale and fullerenes on the nanometer scale, we must treat both experiments within a minimal framework. The most important concept for which one must seek a unified treatment in this problem is the “conformational entropy.” That is, one must account for the degeneracy of solutions to Eq. 1 in some way, for example, by computing a suitable generalized volume for M. Such entropic contributions are less easy to see with a spring-mass system.

  • 2)

    Equation 1 applies to problems in machine learning such as clustering. For example, a function u: V → ℝ allows us to partition V into sets (say two sets corresponding to whether u is positive or negative). In this context, there are no physical regularization parameters such as mass or spring stiffness. It is therefore necessary to modify Eq. 1 in a manner that uses as little additional structure as possible, so that the same formalism applies in different areas of application.

  • 3)

    Replacing hard constraint systems with spring-mass systems raises new issues for numerical simulations. To resolve all scales, it is necessary to understand the limit of infinite stiffness and zero mass. However, spurious high-frequency oscillations make this task numerically challenging.

We approach the problem of conformational diffusion with a stochastic relaxation scheme. This scheme involves three main ideas:

  • 1)

    Relaxation. We relax Eq. 1 to a set of subsolutions. Here, these are defined by the equation

v: V → ℝ^q,   |v(e⁺) − v(e⁻)| < ρ(e),   e ∈ E.   (2)

    This idea is commonly used for hard constraint systems in computer science (27). In our work, it is strengthened by including a principled stochastic scheme to improve a subsolution to a solution.

  • 2)

    Stochastic control theory. We introduce a continuous time random walk for an augmented variable (ut, Lt) consisting of a subsolution ut and a Gaussian filter Lt. At each time t, the evolution of the subsolution is an Itô differential equation of the form

dut(x) dut(y) = L̇t(x, y) dt,   L̇t = Lt S(ut) Lt.   (3)
  • 3)

    Semidefinite programming (SDP). Here, S(v) is a positive semidefinite matrix that depends on a subsolution v only through the residuals r(e) := |v(e⁺) − v(e⁻)| − ρ(e). Roughly, S is chosen to penalize the residual and vanishes when r does. It is computed by an SDP.

Equation 3 pushes a subsolution ut “upward” to a solution u by a process of constant wiggling that leads to an expansion of lengths on average. Fig. 7 illustrates the stochastic flow schematically.

Figure 7. Stochastic relaxation for a three-bar linkage. A random configuration of the three-bar linkage is obtained as the t → ∞ limit of subsolutions ut of Eq. 1. For each t, the lengths of the bars are shorter than the true lengths. However, this relaxation provides wiggle room that allows controlled random fluctuations to expand the length of the bars on average. The numerical scheme is discussed in (23, §5). To see this figure in color, go online.

Equation 3 is derived from an information theoretic interpretation of the measurement of length by successive approximations. The subsolution ut provides an estimate of the length of each edge at scale t, and the covariance L̇t is the most unbiased estimate of the correction at scale t, given a penalty function. Further, although Eq. 3 is defined on information theoretic grounds, it corresponds to a classical physical picture: quasistatic evolution of a thermal system (ut, Lt). Several choices of S(v) are possible. However, each choice relies on the same thermodynamic foundations, so that the dependence of S on v is analogous to an equation of state (23, §4, 5).
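The following cartoon (a sketch, emphatically not the algorithm of (23)) conveys the flavor of Eqs. 2 and 3 for the three-bar linkage of Fig. 7. The semidefinite program that determines S is replaced by a crude hand-picked rule (noise at each vertex with amplitude set by the deficits of its incident edges), and the Itô equation is discretized by a simple Euler-Maruyama step. Starting from a shrunken subsolution, the controlled random wiggling expands the bar lengths toward their target values on average; this ad hoc rule typically drives the residuals close to zero but carries none of the guarantees of the SDP-based scheme.

    import numpy as np

    rng = np.random.default_rng(1)

    # Three-bar linkage in the plane: 4 vertices, edges (0,1), (1,2), (2,3) of unit length.
    edges, rho = [(0, 1), (1, 2), (2, 3)], 1.0
    u = 0.5 * np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])  # shrunken subsolution
    dt = 1e-3

    def residuals(u):
        return np.array([np.linalg.norm(u[j] - u[i]) - rho for i, j in edges])

    for _ in range(200_000):
        r = residuals(u)
        if np.all(np.abs(r) < 1e-3):
            break
        # Caricature of Eq. 3: each vertex receives Gaussian noise whose amplitude is
        # set by the deficits of its incident edges. In (23) this covariance is chosen
        # by a semidefinite program; here it is an ad hoc stand-in.
        amp = np.zeros(len(u))
        for (i, j), rij in zip(edges, r):
            amp[i] += max(-rij, 0.0)
            amp[j] += max(-rij, 0.0)
        u[1:] += np.sqrt(dt) * amp[1:, None] * rng.standard_normal((3, 2))  # vertex 0 stays pinned
    print("final residuals:", residuals(u))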

The current bottleneck is the need to benchmark this scheme, optimizing the choice of S, before applying it to problems in self-assembly. The use of SDP provides a fast and stable numerical method, supported by theoretical guarantees (polynomial time convergence) and reliable software. The limit u of ut as t → ∞ is always guaranteed to exist, though it may be a subsolution, not a solution. Thus, even in situations in which Eq. 1 does not admit solutions (e.g., mapping a triangle into the line), the method provides a principled approximate solution.

This concludes the technical part of this work. The next section is an informal account of the origin and context of these ideas.

Results and discussion

Does life compute?

Schrödinger’s little book (28) led a generation of physicists to the nascent field of molecular biology. The closest parallels within mathematics are Turing’s and von Neumann’s investigations of computing and the human mind, which led to the creation of computer science. The purpose of this section is to explain a sense in which self-assembly sits at a fascinating intersection of these disciplines. It is structured as an essay, primarily for a biologically minded reader, about the intimate ties between self-assembly and the foundations of mathematics.

This essay is also a counterpoint to an interesting work by Gromov (29) that begins with the question: is there mathematics in biology? His answer, and ours, is an emphatic yes. But this is the sort of rhetorical question that should be treated with seriousness, as the underlying sentiment should be rephrased for the nonmathematician as “is there ‘real’ mathematics in biology?” Of course, this leads to the question of what exactly constitutes “real” mathematics, because the relation between the aesthetic, pragmatic, and utilitarian aspects of the subject is a matter of perpetual debate. But as far as the relationship between mathematics and the sciences goes, it reflects the primacy of physical law in determining the evolution of mathematical thought since Newton, and some ambivalence, even today, about the role of probabilistic reasoning in the foundations and practice of mathematics.

Our purpose here is to provide a candid account of what some aspects of biology look like from within mathematics. The discussion is structured around the three themes raised in the introduction to this work: the formalization of computation in terms of effective procedures, the role of probability theory, and the search for unity in geometry. These questions were carefully studied in the 1930s–1950s, a period when applied mathematics emerged as a discipline and Schrödinger wrote his book. We wish to demonstrate their continued vitality within the context of self-assembly. Then, as now, it seems best to begin with the second law of thermodynamics.

Information theory, geometry, and thermodynamics

Shannon’s creation of information theory was directly motivated by the problems of telecommunication (30). We interpret his work more broadly as a principled foundation for thermodynamics, in particular the thermodynamics of living systems, because it provides a sharper understanding of entropy and Gibbs distributions.

To illustrate this point, let us consider Shannon’s models for language. Markov introduced the idea of a Markov chain to formalize style in poetry, but it is Shannon who first understood the full power of this idea. He modeled text as a stationary Markov chain on generalized alphabets and used his model to estimate the entropy of the English language (31). The modern descendant of this idea is a Bayesian paradigm of learning that goes roughly as follows: assume the world consists of random signals; construct a probabilistic model that is capable of generating such signals (such as a Markov chain); then, given observational data, use Bayes’ rule to infer the best fit between model and data. Variants of the same basic scheme may be used to model many problems of cognition, including speech recognition, machine translation, and face recognition (32). The success in practice of this paradigm comes with many bells and whistles; for example, real language cannot be modeled so simplistically (33), and deep learning does much better on these cognition tasks today. What matters for us is the dramatic expansion of the applications of mathematics made possible by mathematically minded gamblers (34).
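As a concrete illustration of the first step of this program (a minimal sketch; Shannon's own estimates used long English texts and higher-order statistics, and the toy corpus below is a placeholder), one can model text as a first-order Markov chain on characters and compute the conditional entropy of the next character given the current one.

    from collections import Counter
    from math import log2

    # Placeholder corpus; a long text is needed to obtain meaningful numbers.
    text = "the theory of communication treats messages as sequences of symbols"

    pairs = Counter(zip(text, text[1:]))
    firsts = Counter(text[:-1])
    n = sum(pairs.values())

    # Conditional entropy H(next char | current char) of the bigram model, in bits per character.
    H = -sum(c / n * log2(c / firsts[a]) for (a, b), c in pairs.items())
    print(round(H, 3))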

Information theory differs from 19th-century thermodynamics and statistical mechanics in three fundamental ways. First, the underlying abstraction is a probability space—informally, a set of events and a set of internally consistent rules for associating probabilities to events. It has no a priori relation to models of “physical” or “visual” space. Despite such abstraction, the concept of entropy acquires new meaning as the fundamental limit on data compression or the depth of search in a yes-or-no game such as 20 questions.

Second, the physical idea of equilibrium acquires a minimal form, despite the absence of “true” time and space. The probability space is a set of infinite sequences in an alphabet, which may be further winnowed to a space of statistically equivalent “typical sequences.” Time evolution is reduced to shifting step by step along such sequences. This idea was motivated by telecommunication, but its abstract character gives it great versatility, including, of course, applicability to genetic codes. The notion of equilibrium is reduced to the idea of minimal, but persistent, background noise leaving the space of typical sequences invariant under shifts.

Finally, Shannon’s deepest result—the channel coding theorem—provides a sharp microscopic understanding of fundamental limits imposed by the second law of thermodynamics. Mathematicians today see heat flow as steepest descent of (information theoretic) entropy. This formulation vastly expands the physical idea of a diffusion process while being firmly rooted in classical foundations. It is these foundations that dictate our formulation of conformational diffusion in Conformational diffusion of linkages, in particular our insistence on minimal geometric reasoning.
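One precise version of this statement (a standard result, not spelled out in the original text) is the Jordan-Kinderlehrer-Otto interpretation: the heat equation ∂ρ/∂t = Δρ is the gradient flow of the functional ∫ ρ log ρ dx (the negative of the Boltzmann entropy) with respect to the Wasserstein-2 metric on probability densities, so that along the flow entropy increases as steeply as the geometry of mass transport allows.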

The reappearance of the concept of entropy in information theory after its birth in the kinetic theory of gases (in the 19th-century work of Boltzmann, Clausius, Gibbs, and Maxwell (35)) led to the initial speculation that information theory is a branch of statistical mechanics. However, in the 1950s, Jaynes showed that statistical mechanics may itself be derived from information theory. More precisely, Jaynes treats information theory as a foundation for statistical inference by viewing a probability space as the fundamental abstraction and using the maximization of entropy as a principle to choose the most unbiased estimate for a random variable given partial information. The efficacy of this method is then demonstrated by recovering the laws of statistical mechanics in classical examples (36, §3, §5).
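The core calculation is short and standard (it is not specific to (36)): maximizing the entropy H(p) = −Σ_i p_i log p_i over probability distributions with a prescribed mean energy Σ_i p_i E_i = U yields, via a Lagrange multiplier β, the Gibbs distribution

p_i = e^(−βE_i)/Z,   Z = Σ_j e^(−βE_j),

and the familiar thermodynamic identities follow by differentiating log Z. In this sense, the canonical ensemble is the least biased distribution consistent with the measured energy.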

Our intuitive conception of entropy in the kinetic theory of gases and in information theory is different. In the first case, entropy is inextricably tied to cartoons of atoms jiggling around in three-dimensional space. In the second case, entropy is simply a number associated to a set of sequences. Although a space of sequences seems a more sterile abstraction, it provides the foundation from which all observable consequences of the classical theory are obtained with simple calculations. In particular, Jaynes avoids paradoxes of ergodicity that plagued kinetic theory in the 19th century.

This approach has disturbing philosophical implications, as it causes us to question whether there are any fundamental physical models or whether physics is “just” the best fit to observations of the natural world. The interpretation of the entropy of a discrete random variable as the optimal length of a code leads to the idea, developed mainly by computer scientists and statisticians, that the best models are those that have minimal description length (37). This view is an extreme counterpoint to physical theories, as it makes no assumptions on the structure of the source that produces the signal and states only that the best description of the data is the one that compresses the data most efficiently. In the minimal description length view, learning is data compression and there are no true models, not even Newton’s laws.

These examples illustrate a tension between “new” and “old” applications of mathematics. Shannon’s work extended the applications of mathematics by recognizing the utility of probabilistic reasoning in entirely new areas. Yet, information theory remains on the fringes of mathematics, not quite probability theory but a nebulous province situated somewhere between engineering, mathematics, and physics. For the most part, the view persists among mathematicians that the most successful models of nature are the differential equations of classical physics; the second law remains an afterthought. But in the information theoretic interpretation, the second law of thermodynamics is the fundamental physical principle because it reflects the almost tautological fact that a theory must carefully account for a process of measurement. It is therefore necessary to examine the traditional models from an adversarial intellectual position.

The stochastic relaxation algorithm presented in Conformational diffusion of linkages is a spinoff of such an adversarial attack on the problem of turbulence. Perhaps the most traditional models of applied mathematics are partial differential equations, such as the Euler equations that govern the motion of ideal incompressible fluids. Recent work in mathematics has revealed unexpected connections between these equations and Nash’s work in the 1950s on the foundations of geometry (38,39). This connection motivated us to return to Nash’s work, examining it through the lens of information theory. The relaxation scheme in Conformational diffusion of linkages retains essential technical insights from Nash’s works, but it is based on different conceptual foundations.

The main new idea is that isometric embedding should be seen as a process of replication by which an observer makes a copy of a given space by measurement at increasingly fine scales of the distances between each pair of points. The process of measurement is modeled with information theory, though it is necessary to augment Jaynes’ approach with spaces of sequences that change with time. All the stochastic relaxation schemes in (23) are generalized heat flows that transfer information between a source and an observer. They reveal an unexpected unity between Shannon’s channel coding theorem and Nash’s embedding theorems that allows us to view each limiting shape, u, as an optimal code.

Self-assembly and computing

The study of self-assembly also leads back to debates from the foundational era of computing. Let us begin at the mathematical beginning.

The creation of the computer relied in an essential way on investigations into the foundations of mathematics. These began in the late 1800s with Cantor’s work on the nature of infinity and evolved into fierce debates on intuition, logic, and the nature of proof by 1925 (40). By the end of the 1930s, these ideas had settled into the formalization of an algorithm as a set of narrowly defined procedures for determining the value of a function. The determination of the truth of a statement within a system of logic—Hilbert’s “Entscheidungsproblem”—could be reduced to the operation of a single construct: the universal Turing machine. The universal Turing machine differs from the other abstractions of computing in the sense that it allows visualization of a computer as a machine moving along a long tape, thus providing a template for the design of actual computers. It is an abstract construct of importance for engineering, much like the Carnot engine.

One of the most fascinating chapters in synthetic biology is the realization of Turing machines by biological means. Two striking examples are Adleman’s demonstration of a DNA computer that solves the traveling salesman problem and Winfree’s thesis on algorithmically patterned DNA lattices (41,42). In both these experiments, DNA is used to compute, in the sense of Turing machines, answers to model problems in computer science. The traveling salesman problem reflects the importance of algorithms; what is required here is a fast procedure to determine the minimum of a function defined on a finite set (in “human” mathematics, emphasizing existence proofs, this question is trivial). Winfree implemented a set of algorithms using DNA tiles with sticky ends. The connection between the tiling model and Turing machines is discussed below. DNA self-assembly is now a thriving field; these remarks are used to emphasize that early studies in the area are biological realizations of a formal mathematical view of computing.

Let us contrast this with physicists’ take on computation. The study of physical limits on computing was initially driven by the need to create stable logic gates using early transistors and vacuum tubes. However, in the 1960s, it evolved into a study of computing as a thermodynamic process. Bennett and Landauer’s work provides a profound understanding of the role of entropy and information theory in computing (43,44). In particular, by placing the emphasis on reversibility, the paradox of Maxwell’s demon in the kinetic theory of gases may be resolved. The main insight is that the transfer of information is reversible; it is the erasure of information that is irreversible. Fredkin and Toffoli provide another model of reversible computing, but now at zero temperature, designing logic gates with a system of elastic billiard balls (45). Their system may be seen as a zero-temperature limit of Bennett’s model. Finally, a third model, created by Feynman, further simplifies Bennett’s model of reversible computation (46).

These models too have important biological realizations, the simplest of which is the analysis of RNA polymerase as a reversible copying machine. In contrast with DNA computing, here the model of computing provides a theoretical lens to infer properties of a biomolecule designed by evolution. The theoretical model is not a framework for the design of a synthetic device.

These examples illustrate a continuity of thought between the biological, mathematical, and physical worlds. It is remarkable that two distinct foundational crises in the late 19th century—the nature of proof in mathematics and the role of time reversibility in physics—find expression today in models of self-assembly.

Geometry and the imagination

Our purpose in this work has been to provide easily visualized examples of morphogenesis. Let us revisit the above examples, distinguishing between abstractions that are formally equivalent and those that are easy to see.

In the mid-1950s, Hao Wang turned his attention from the philosophy of mathematics to logic. His work on tilings emerged from a broader study of automated theorem proving. An early computer program written by Wang established the 350 propositions of Russell and Whitehead’s Principia with ease (47). These results were established within the classical logical framework of the predicate calculus. Although the results are of broad interest, detailed arguments in formal logic are impenetrable to most mathematicians. Thus, Wang’s work on domino tiles began as a method to communicate his results on automated theorem proving to other scientists. In a brief, elegant study he showed that the Entscheidungsproblem could be reformulated as a question about growth patterns with tiles with given matching rules (48). As a formal equivalent, his model adds nothing new to the concept of the Turing machine. But the tiles provided a valuable geometric visualization of the Turing machine. That it is this model that underlies DNA self-assembly causes us again to reflect on the distinction between visual intuition and formal logic.

The role of visual imagination runs through reversible computing too. Bennett defines a computer as “an engine that transforms free energy into waste heat and mathematical work.” Mathematical work consists of two aspects: calculation and proof, both of which are formalized with Turing machines. Yet, the reversible computers proposed by Bennett (43), Fredkin and Toffoli (45), and Feynman (46) are geometric constructs more than they are physical constructs. These are the hyperbolic geometry of billiards for the Fredkin-Toffoli model, confined Brownian motion in Feynman’s model, and a particularly creative kinematic linkage in Bennett’s Brownian Turing machine (see (43), Fig. 7). As in Wang’s work, each of these models adds to our comprehension through visualization. Discussions on whether these abstractions can be realized physically are of greater practical than theoretical importance. Once it is established that these models are equivalent and that they augment Turing machines by imposing a time-reversal symmetry, the question of which is better reflects human intuition and needs, not thermodynamics.

The most mysterious aspect of mathematics is the thin line between the rational and irrational, the search for patterns that “feel right” and those that are “proved.” Wang’s work showed rather effortlessly that Russell and Whitehead’s Principia, perhaps the most patient attempt at mathematical formalization ever, is, from a mechanistic standpoint, utterly trivial. No student of mathematics can fail to be inspired by the persistence of Euclid’s axiomatic system for plane geometry as a model for mathematical theories, but here too Wang shows that the system may be mechanized.

These problems have broader interest in artificial intelligence. Tasks of cognition, such as image classification (e.g., is a cat present in an image?), can be formulated as a function between an input (a digitized image) and an output (1 or 0). Given the ease with which Euclid’s opus could be formalized, surely tasks such as this should be easy to solve? Yet, several problems in computer vision remain stubbornly unresolved. Mumford makes the case that a compelling message of the development of artificial intelligence is the repeated triumph of statistical models of learning over logic-based systems of learning (49). That is, it is not mathematical models of computing that won, but a mathematical model of communication.

But these battles are not over. The recent success of deep learning over statistical learning theory (the cat has been found!) harks back to an even older battle in computing, that between analog and digital. Again, DNA tiling offers a fascinating means to explore the boundaries between these distinct views of computing, communication, and human intuition.

Information theory and morphogenesis

What gives geometric reasoning such force? It is again helpful to contrast developments in biology and mathematics.

The earliest attempts to classify organisms are by scale and shape. The work of the Dutch artist and collector, Albertus Seba, provided Linnaeus a template for his taxonomy. Here, too, we see the interplay between the synthetic and the natural; Seba’s work includes flights of fancy and imaginary beasts, classified by Linnaeus as Paradoxa: organisms of speculation, not yet created. And it was the visual ability to distinguish between the shapes of beaks that provided Darwin a critical hint on evolution, an insight that when stripped of context is no different than the everyday manner in which we remark on the similarity between a child and a parent.

The study of geometry carries a particular resonance for all mathematicians. Euclid’s arguments have been unchanged in their essence for 2000 years across many human cultures. The robustness and evolution of mathematics has strong similarities with language. The first formalization of grammar, the Sanskrit grammar of Panini, parallels the axiomatization of geometry by the Greeks. In both linguistics and mathematics, one sees a persistent dichotomy between intuitive notions of rules, easily understood by children, and formalizations that are often incomprehensible to practitioners, unless these are mediated with careful communication using examples. Nor is this dichotomy restricted to these areas; linguistic metaphors govern many cultural studies, and the linguistic studies of De Saussure and Jakobson played a critical role in Levi-Strauss’ work in cultural anthropology.

The emotional appeal of visual beauty runs through studies of morphogenesis. Who can fail to be inspired by D’Arcy Thompson’s meticulous cataloguing of the shapes of organisms? Intrinsic mathematical beauty carries its own seduction. The renowned mathematician Rene Thom treats structural stability, a mathematical notion from low-dimensional geometry, as a basis for his theory of morphogenesis (50). It feels churlish to dismiss such romanticism, but as a basis for the study of life, both approaches have serious flaws. Modern biochemistry shows quite convincingly that a far better basis for the classification of life is the analysis of genetic sequences and conserved chemical reactions in the spirit of Woese, not a visual classification like Thompson’s. As for Thom, the work has not aged well. Today it seems a mathematical flight of fancy, akin to Seba’s cabinet of curiosities: interesting as a cultural artifact, perhaps, but not as a systematic science.

The problem is that vision is not geometry. Nor even is it a foundation for reason. A geometric sketch may capture the essence of an argument for humans, but it is not a formal proof. This dichotomy is one of the reasons that the nature of space holds such deep fascination for mathematicians. A triumph of 19th century mathematics was the freedom obtained when Euclid’s parallel postulate was shown to be superfluous. Riemann made a profound distinction between hypotheses about the nature of space and properties that may be determined through the measurement of length.

The distinction between intrinsic and extrinsic notions of space is at the heart of the Nash embedding theorems. For example, we may imagine a two-dimensional ant crawling on linkages like those in Fig. 2 or a three-dimensional grad student measuring the lengths in a lab. Equation 1 simply says that their measurements of length must agree. The grad student certainly observes more features of the linkage as it twists in space, but as far as measurements of lengths go, these are the same object. The fundamental adversarial position in our approach to these theorems is that we place the emphasis on a random process of successive approximation by which length is measured, not the clever but “visual” devices that Nash uses (23). More broadly, the above examples are chosen to illustrate that mathematical definitions of space take many forms—obviously geometric objects such as polyhedra, certainly, but also spaces of languages, phylogenetic trees, and chemical reactions. These spaces may sometimes be visualized, but a full appreciation requires information theory.

Returning to self-assembly, we see that the study of these model systems forces us to confront some basic questions: is morphogenesis limited to the study of the change of shape, and if so, what is shape and how does it emerge from the expression of data in genetic codes? Our view is that a principled approach to morphogenesis must be rooted in a more flexible notion of space, so that the same abstract principles may be used to model learning tasks such as the acquisition of speech and motor skills by toddlers. In principle, both these skills can be formalized information theoretically using different sets of sequences, even if naively they deal with different notions of space.

Why should such abstractions be of interest to a biologist? Of all the problems of perception, none seems to us to be more fundamental than the perception of space. The world of mammals coexists with the world of birds, bacteria, plants, and extreme life forms in the deep oceans. It is not prudent to restrict ourselves to a notion of space that is defined primarily by human vision, nor indeed should the study of morphogenesis be restricted to naive classifications of space. A more scrupulous account of spatial interaction between different life forms must be based strictly on a comparison between their internal measurements of space. This form of reasoning is better described by information theory, and we find it valuable to insist on it as a foundation for morphogenesis.

Acknowledgments

This work was supported by the National Science Foundation (DMS 1714187), the Simons Foundation (Award 561041), the Charles Simonyi Foundation, and the School of Mathematics at the Institute for Advanced Study, Princeton.

Editor: Stanislav Shvartsman.

References

  • 1. Conway J. The game of life. Sci. Am. 1970;223:4.
  • 2. Foe V.E. Mitotic domains reveal early commitment of cells in Drosophila embryos. Development. 1989;107:1–22.
  • 3. Seeman N.C. DNA in a material world. Nature. 2003;421:427–431. doi: 10.1038/nature01406.
  • 4. Elston T., Wang H., Oster G. Energy transduction in ATP synthase. Nature. 1998;391:510–513. doi: 10.1038/35185.
  • 5. Noji H., Yasuda R., Kinosita K., Jr. Direct observation of the rotation of F1-ATPase. Nature. 1997;386:299–302. doi: 10.1038/386299a0.
  • 6. Shinoda T., Ogawa H., Toyoshima C. Crystal structure of the sodium-potassium pump at 2.4 A resolution. Nature. 2009;459:446–450. doi: 10.1038/nature07939.
  • 7. Caspar D.L., Klug A. Physical principles in the construction of regular viruses. Cold Spring Harb. Symp. Quant. Biol. 1962;27:1–27. doi: 10.1101/sqb.1962.027.001.005.
  • 8. Kroto H., Heath J., Smalley R. C60: Buckminsterfullerene. Nature. 1985;318:162–163.
  • 9. Liu Y., Hu C., Ward M.D. Supramolecular Archimedean cages assembled with 72 hydrogen bonds. Science. 2011;333:436–440. doi: 10.1126/science.1204369.
  • 10. Bhatia D., Mehtab S., Krishnan Y. Icosahedral DNA nanocapsules by modular assembly. Angew. Chem. Int. Ed. Engl. 2009;48:4134–4137. doi: 10.1002/anie.200806000.
  • 11. Douglas S.M., Dietz H., Shih W.M. Self-assembly of DNA into nanoscale three-dimensional shapes. Nature. 2009;459:414–418. doi: 10.1038/nature08016.
  • 12. Pandey S., Ewing M., Menon G. Algorithmic design of self-folding polyhedra. Proc. Natl. Acad. Sci. USA. 2011;108:19885–19890. doi: 10.1073/pnas.1110857108.
  • 13. Azam A., Leong T.G., Gracias D.H. Compactness determines the success of cube and octahedron self-assembly. PLoS One. 2009;4:e4451. doi: 10.1371/journal.pone.0004451.
  • 14. Leong T.G., Lester P.A., Gracias D.H. Surface tension-driven self-folding polyhedra. Langmuir. 2007;23:8747–8751. doi: 10.1021/la700913m.
  • 15. Johnson-Chyzhykov D., Menon G. The building game: from enumerative combinatorics to conformational diffusion. J. Nonlinear Sci. 2016;26:815–845.
  • 16. Wales D. Closed-shell structures and the building game. Chem. Phys. Lett. 1987;141:478–484.
  • 17. Zlotnick A. To build a virus capsid. An equilibrium model of the self assembly of polyhedral protein complexes. J. Mol. Biol. 1994;241:59–67. doi: 10.1006/jmbi.1994.1473.
  • 18. Kaplan R., Klobušický J., Menon G. Building polyhedra by self-assembly: theory and experiment. Artif. Life. 2014;20:409–439. doi: 10.1162/ARTL_a_00144.
  • 19. Dill K.A., Ozkan S.B., Weikl T.R. The protein folding problem. Annu. Rev. Biophys. 2008;37:289–316. doi: 10.1146/annurev.biophys.37.092707.153558.
  • 20. Russell E.R., Menon G. Energy landscapes for the self-assembly of supramolecular polyhedra. J. Nonlinear Sci. 2016;26:663–681.
  • 21. Twarock R. A tiling approach to virus capsid assembly explaining a structural puzzle in virology. J. Theor. Biol. 2004;226:477–482. doi: 10.1016/j.jtbi.2003.10.006.
  • 22. Boyer P.D. The ATP synthase--a splendid molecular machine. Annu. Rev. Biochem. 1997;66:717–749. doi: 10.1146/annurev.biochem.66.1.717.
  • 23. Menon G. In: Geometric Science of Information. GSI 2021. Lecture Notes in Computer Science. Nielsen F., Barbaresco F., editors. Vol. 12829. Springer; Cham: 2021. Information theory and the embedding problem for Riemannian manifolds; pp. 605–612.
  • 24. Akbulut S., King H. On approximating submanifolds by algebraic sets and a solution to the Nash conjecture. Invent. Math. 1992;107:87–98.
  • 25. Demaine E., O’Rourke J. Cambridge University Press; Cambridge, UK: 2008. Geometric Folding Algorithms: Linkages, Origami, Polyhedra.
  • 26. Zappa E., Holmes-Cerfon M., Goodman J. Monte Carlo on manifolds: sampling densities and integrating functions. Commun. Pure Appl. Math. 2018;71:2609–2647.
  • 27. Goemans M.X., Williamson D.P. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. Assoc. Comput. Mach. 1995;42:1115–1145.
  • 28. Schrödinger E. Cambridge University Press; New York, NY: 1951. What is Life? The Physical Aspect of the Living Cell.
  • 29. Gromov M. Crystals, proteins, stability and isoperimetry. Bull. Amer. Math. Soc. (N.S.) 2011;48:229–257.
  • 30. Shannon C.E., Weaver W. The University of Illinois Press; Urbana, IL: 1949. The Mathematical Theory of Communication.
  • 31. Shannon C.E. Prediction and entropy of printed English. Bell Syst. Tech. J. 1951;30:50–64.
  • 32. Mumford D., Desolneux A. AK Peters, Ltd.; Natick, MA: 2010. Pattern Theory: The Stochastic Analysis of Real-World Signals.
  • 33. Chomsky N. Three models for the description of language. IRE Trans. Inf. Theory. 1956;2:113–124.
  • 34. Diaconis P. The Markov chain Monte Carlo revolution. Bull. Amer. Math. Soc. (N.S.) 2009;46:179–205.
  • 35. Garber E., Brush S.G., Everitt C.W.F., editors. Maxwell on molecules and gases. MIT Press; 1986.
  • 36. Jaynes E.T. Information theory and statistical mechanics. Phys. Rev. 1957;106:620–630.
  • 37. Rissanen J. Springer; New York: 2007. Information and Complexity in Statistical Modeling, Information Science and Statistics.
  • 38. Nash J. C1 isometric imbeddings. Ann. Math. 1954;60:383–396.
  • 39. Nash J. The imbedding problem for Riemannian manifolds. Ann. Math. 1956;63:20–63.
  • 40. Weyl H. Princeton University Press; Princeton, NJ: 2009. Philosophy of Mathematics and Natural Science.
  • 41. Adleman L.M. Computing with DNA. Sci. Am. 1998;279:54–61.
  • 42. Winfree E. California Institute of Technology; 1998. Algorithmic self-assembly of DNA. PhD thesis.
  • 43. Bennett C.H. The thermodynamics of computation—a review. Int. J. Theor. Phys. 1982;21:905–940.
  • 44. Landauer R. The physical nature of information. Phys. Lett. A. 1996;217:188–193.
  • 45. Fredkin E., Toffoli T. Conservative logic. Int. J. Theor. Phys. 1982;21:219–253.
  • 46. Feynman R.P. CRC Press; Boca Raton, FL: 2018. The Feynman Lectures on Computation.
  • 47. Wang H. Toward mechanical mathematics. IBM J. Res. Develop. 1960;4:2–22.
  • 48. Wang H. Science Press Beijing; Kluwer Academic Publishers; Beijing; Dordrecht: 1990. Computation, Logic, Philosophy: A Collection of Essays, Vol. 2 of Mathematics and its Applications (Chinese Series).
  • 49. Mumford D. In: Mathematics: Frontiers and Perspectives. Arnold V., Atiyah M., Lax P., Mazur B., editors. American Mathematical Society; 2000. The dawning of the age of stochasticity; pp. 197–218.
  • 50. Thom R. CRC Press; Boca Raton, FL: 2018. Structural Stability and Morphogenesis.
