Abstract
Natural life is chemical. Chemistry, not abstract logic, determines and constrains its potentialities. One of these potentialities is cognition. Humans have two equivalent cognitive systems: the immune and the nervous ones. The principle of functioning is the same for both: rooted in previously acquired and embodied knowledge, the system intrinsically generates many new chemical states, and the environment selects and stabilizes the appropriate ones. From the fundamental level of complicated brain chemistry (“biochemese”) higher levels emerge: the physiological (“physiologese”) and the mental (“mentalese”). Processes are causal at the basic chemical level; at the other levels they are merely isomorphic, tautological translations. The thermodynamic necessity to maintain correlations in the complicated chemical system and to generate variants makes the nervous system energetically expensive: it runs continuously at full speed, and external inputs only trigger and modulate the ongoing dynamics. Models of the brain as a universal computer are utterly inadequate.
Keywords: bioenergetics, brain, cognition, computation, neurobiology, thermodynamics
Introduction
The twentieth century has been designated by Evelyn Fox Keller as the century of the gene.1 According to Donald Kennedy, the twenty-first century may become the century of the brain.2 Edward O. Wilson observed:3 “Much of the history of modern philosophy, from Descartes and Kant forward, consists of failed models of the brain. All that has been learned empirically about evolution in general and mental processes in particular suggests that the brain is a machine assembled not to understand itself, but to survive.” The best models of the brain have always been inspired by peak achievements of science and have compared the brain to the most sophisticated machines of the time. No wonder that in our time the computer metaphor, with its central concept of “information processing”, has been fashionable in both the neurosciences and neurophilosophy.
None of the proposed models has proved satisfactory. Human knowledge may not yet have advanced enough to grasp the brain and the nervous system. Instead of proposing models, a novel approach may be more modest in its ambition: it may first attempt to establish what, in the construction and functioning of brain and mind, is impossible in principle. It would thereby circumscribe the space of possibilities that any realistic model should take into account. After all, impossibility statements are the very foundations of science.4 This paper analyzes the constraints and impossibilities imposed on the brain by energetics. It has been stated that cognitive biology can be considered an outgrowth from, and a successive development of, bioenergetics.
The Crucible of Terrestrial Evolution: Life and Brain Are Chemistry
Natural life (n-life), as it has evolved on Earth, is chemical. Chemical energy, i.e., electromagnetic energy involved in the rearrangement of electrons on the bonding orbitals of atoms, is the main energy of life. Life based on principles other than chemical ones is conceivable. Virtual life (v-life), mainly based on cellular automata, thrives on computer screens. Artificial life (a-life), in the form of self-replicating robots, will soon complement n-life and, according to some visionaries, may once supersede it. Somewhere in the cosmos life may function not as a chemical, but as an electrical or mechanical system, or even in forms unimaginable to humans.
In contrast to ordinary mechanics, chemistry is the science of emergence.5 Chemical interactions differ from other kinds of interactions, such as mechanical combinations of Lego parts. Putting together two pieces of Lego does not bring about a qualitative change, but rearranging electrons in atoms of hydrogen and oxygen to produce a molecule of water creates a novelty which, at least in a description available to human subjects, has not been inscribed in the precursor atoms. Already in the 19th century John Stuart Mill recognized this difference.6 He drew a distinction between two causal modes, the mechanical (homopathic) and the chemical (heteropathic). According to Mill, when two or more causes combine in the mechanical mode to produce a certain effect, the effect is the sum of what would have been the effects of each of the causes had it acted alone (in contemporary terms, we call such interacting systems linear). In the chemical mode, the effect produced is in no sense the sum of what would have been the effect of each cause acting alone. Chemistry, and thus n-life as well, quite naturally abounds in emergences.
Biochemistry is not “ordinary” chemistry. In contrast to common chemical processes, which are scalar, biochemical processes are vectorial. Peter Mitchell, who was awarded a Nobel Prize for this discovery, is no less important for the comprehension of terrestrial life than is Charles Darwin. Proteins, due to their structural asymmetry, confer vectoriality on biochemistry. They have been selected in biological evolution to bind ligands and in this way give significance to the ligands as specific aspects of the environment. It is this purpose of specific binding that makes proteins teleonomic structures and imparts to protein-ligand interaction the unique character of molecular cognition. Proteins exhibit molecular sentience—the capacity to incessantly sample conformational substates, one of which becomes stabilized upon recognizing the appropriate ligand. Enzyme catalysis and signal transduction occur as a result of the binding of specific ligands to complementary pre-existing states of a protein and the consequent shift in the equilibria. Proteins are the elementary epistemological units of life.7
Because life is based on chemistry, emergent phenomena at various levels of the life hierarchy are as natural, but also as unpredictable, as inevitable and as unequivocal, as is the emergence of water from hydrogen and oxygen. This also applies to the brain as a chemical system. Consciousness itself may be a specific emergence in a complicated chemical system, a feature specific to a highly evolved n-life and absent in other kinds of life. To put it metaphorically, biochemical processes in the nervous system speak to us, human observers, in their level-specific language, “biochemese”. These processes are organized in space and time and they speak, at a higher level of the hierarchy, in a different language, “physiologese”. At a still higher level, the complexity of biochemical interactions achieves a form of subjective experience and, eventually, consciousness, with a corresponding level-specific language, “mentalese”. As properly stressed by Steven Rose, between the biochemical and physiological descriptions—and, one can add, consequently between the biochemical and psychological descriptions—there is not a causal but a mapping relationship.8 The statement that the chemical level, the interactions of atoms and molecules, determines unequivocally all the other, upper, levels can be dubbed the principle of radical materiality. The upper levels cannot causally determine the lower levels. In lively discussions on “mental causation”, still in vogue among philosophers, the notion of “top-down causation” or “downward causation” is often used, the latter referring to its introduction in 1974 by Donald Campbell.9 Occasionally, scientists also use this term (reviewed in ref. 10). What Campbell wanted to describe by introducing it is the fact that “all processes at the lower level of a hierarchy are restrained by and act in conformity to the laws of the higher level”.9 Indeed, it is so, not only in the case of the mind, but also of computer software or of any human-made artefacts.10 It even applies to natural selection, in which “the biosphere act[s] downwards on molecules of DNA”.11 However, as Paul Davies explains in his paper “The physics of downward causation”, there are no new forces involved in this “downward action” (recalling the failed forces or causative agencies, such as the ether, the élan vital, psi forces): “top-down talk refers not to vitalistic augmentation of known forces, but rather to the system harnessing existing forces for its own ends”.11 Alfred Lotka, a biological visionary, wrote already in 1925:12 “To say that a necessary condition for the writing of these words is the willing of the author to write them, and to say that a necessary condition for the writing of them is a certain state and configuration of the material of his brain, these two statements are probably merely two ways of saying the same thing.”
Many standard models of brain and mind ignore the domination of chemistry. One of the founding fathers of modern research on cognition, George Miller, did not mention chemistry among the sciences that, in his view, constitute the cognitive sciences.13 Standard computational models of cognition admit that the brain is a “structural and functional realization” of cognition, but, in principle, there is no reason why the activities of the brain could not be implemented in a different kind of “hardware”. It does not matter whether the material anchoring of cognition is represented by the biochemical structure of the brain or by silicon microchips. One can infer from the professional careers of three pioneers of brain modeling why, after the Second World War, thinking on the brain proceeded in this one-sided, and probably erroneous, way. Norbert Wiener, the founder of cybernetics, was engaged during the war in studies on anti-aircraft fire control. It may have been in this work that he conceived the idea of considering the brain of the human operator as part of the steering mechanism and of applying to the brain the concepts of input and output, information, feedback and stability, which had been devised for mechanical systems and electric circuits. John von Neumann also served as a consultant to the armed forces during the war. He realized the necessity of massive computations and showed that computers, with all of the instructions hard-wired, could be made much more flexible by being equipped with programs. No wonder that he subsequently transferred his ideas of computation and program from computers to the brain, visualizing the brain as a programmable computer.14 Marvin Minsky, a pioneer of artificial intelligence (AI), has modified his views several times, and has even given his latest book the title “The Emotion Machine”, yet he continues to profess that computers and the brain operate on the same principle: emotions themselves are just a specific form of computation.15
André Lwoff put it accurately:16 “The organism does not handle concepts of grade or logarithms of probabilities. The organism handles atoms or molecules and the energy of light or of chemical bonds.” It took time to realize that cognition cannot be separated from its concrete material substrate and that, in human cognition, it is the body that plays the pivotal role. In the last two decades, the idea of “embodied cognition” may have been slowly superseding the formal, neo-Cartesian views on the nature of mind and cognition as disembodied, ethereal “information processing” (reviewed in refs. 17–19). Cognition is a material process. It is, as are all material processes in the universe, inseparably linked to energy transformations.
Thermodynamics of Biological Evolution: Knowledge as Thermodynamic Height
Thermodynamics, the science of the irreversible flow of energy from high-quality energy sources down to thermal sinks, has been called the queen of chemistry. As natural life is chemical, thermodynamics should be considered the queen, the real ruler, of terrestrial life, too. It has been said that Darwin's theory of evolution by variation and selection has been the most essential breakthrough in the human view of nature, separating the history of human thought into two periods: before Darwin (B.D.) and after Darwin (A.D., anno Darwini). Yet Charles Darwin could not be aware of the importance of thermodynamics for his doctrine. He realized that “our ignorance of the laws of variation is profound”, but obviously could not see the second law of thermodynamics behind the uncorrelated variations: his book “On the origin of species” appeared in 1859, while Clausius formulated the second law of thermodynamics in 1865. When biologists later, much under the impression of the physicist Erwin Schrödinger's book “What is life?”,20 recognized this fact, the second law was largely interpreted as the tendency of the universe to achieve states of increased disorder (measured as entropy), meaning, at the same time, the flow of energy “downhill”, its dissipation, its “devaluation”. Life was considered to be subject to the second law, but arranged in such a way as to oppose or slow down this universal tendency. In his influential book “Chance and necessity” Jacques Monod pictured life as organized systems tending to preserve their organization against the destructive effect of the second law.21 As he put it, “For modern theory, evolution is not a property of living beings, since it stems from the very imperfections of the conservative mechanism which indeed constitutes their unique privilege. And so one may say that the same source of fortuitous perturbations, of ‘noise’, which in a nonliving (i.e., nonreplicative) system would lead little by little to the disintegration of all structure, is the progenitor of evolution in the biosphere and accounts for its unrestricted liberty of creation, thanks to the replicative structure of DNA: that registry of chance, that tone-deaf conservatory where the noise is preserved along with the music.”
It was Monod's contemporary, the physicist Ilya Prigogine, who stressed that the sword of the second law of thermodynamics is double-edged.22 If a human observer focuses his/her attention on a selected part of the universe, a system, and considers the rest of the universe as the environment, the system may not just preserve its organization or increase it by sampling chance, as envisaged by Monod, but self-organize by itself, because of the inherent tendency of some systems to increase their complexity. The system is running “uphill”, its order is increasing. But this is only possible at the “expense” of the environment, in which energy dissipation becomes more intense. Other scientists, among them biologists, have elaborated Prigogine's arguments,23–27 proposing a reformulation of the second law or, in the opinion of some of them, extending it to a new thermodynamic law. According to Schneider and Kay, although the second law is a statement about increasing disorder, it also plays a central role in creating order: nature “abhors” gradients.24 “The thermodynamic principle which governs the behavior of systems is that, as they are moved away from equilibrium, they will utilize all avenues available to counter the applied gradients. As the applied gradients increase, so does the system's ability to oppose further movement from equilibrium.” Structuring is a way of increasing the rate of energy dissipation.
At its very base, the evolution of life on Earth is nothing else but a manifestation of the second law of thermodynamics. Wherever in the universe thermodynamic conditions of temperature, pressure and chemical composition allow chemical processes, structuring sets in. Our Earth is probably just one of the cosmic “white holes”, at which local dissipations of energy are running at ever-increasing speed. If the source of energy is constant, as is the flow of energy from Sun to Earth, the growth of dissipation has the character which Alfred Lotka anticipated in 1922:28 “Evolution proceeds in such direction as to make the total energy flux through the system a maximum compatible with the constraints.” Or, in the words of Eric Chaisson, it has the character of an ever-increasing rate of energy flow through a system of a given mass, that is, of increasing “free energy rate density”.27
From the vantage point of the present stage of terrestrial evolution we can look over all the previous stages and range them in a sequence of accelerating dissipation. The first, simplest stage was the dissipation of solar (and perhaps other sources of) energy gradients in the synthesis of simple organic compounds—the phase of prebiotic syntheses. Later, organic compounds were assembled into various dissipative structures, including elementary protocells, which maintained their internal order by continual dissipation. Some of them gained the quality of ontotelic systems—organized systems which “aimed at” preserving their permanence, their onticity, by directing the flow of energy through them before its full dissipation. It has been proposed to label such ontotelic systems “subjects”. The propensity of the world, ensuing from the second law of thermodynamics, to create subjects has been dubbed “subjectibility”. Subjectibility may be seen as a third “substance” of the world, along with matter and energy.29 Another intensification of the total energy flux through the system set in when some ontotelic systems gained the ability to make copies of themselves, to replicate. An important new stage appeared when the replicative ontotelic systems became sentient—they were continually displaying alternative states, of which the one recognizing a relevant feature of the environment became stabilized. It has already been mentioned that proteins are the basic chemical entities exhibiting sentience at the simplest, molecular level. It is from this stage on that the voraciously dissipating systems may be called living systems. Sentience, and hence cognition, may be defined as the demarcating characteristic of life. Cognitive systems have markedly increased the densification of energy flow. The emergence of systems with higher-level sentience, in particular of the nervous system, brought about additional densification. Cultural evolution, a recent new type of evolution, has become several orders of magnitude faster than biological evolution, and it may nowadays be complemented by a still much faster technoscientific evolution.
The reason why cognition has become the most accelerating factor of evolution is straightforward: the growth of knowledge, noogenesis, is autocatalytic, and hence exponential or even hyperbolic. In the simplest case, the increase of knowledge is linearly dependent on the already existing knowledge, dK/dt = cK, hence K = e^(ct). The evolution of knowledge has the character of a Bayesian ratchet: accumulation of knowledge is an incremental process of changing the probability of existing justified beliefs in the light of new evidence.30 According to Hans Kuhn, in the course of evolution organisms gain in quality.31 The quality represents knowledge, and it is measured by the total number of bits to be discarded until the evolutionary stage under consideration is reached. Kuhn's measure of knowledge is related to a similar measure of complexity, for which Seth Lloyd and Hans Pagels introduced the term “thermodynamic depth”.32 (For a more detailed description see ref. 29.) Knowledge embodied in a system corresponds to its epistemic complexity; the greater the knowledge, the greater the epistemic complexity. But rather than characterizing it by the notion of “depth”, a more appropriate term would be “thermodynamic height”. “Depth” corresponds to the sum total of entropy which had been produced in the past to reach the present state, while “height” is a measure of the work capacity associated with the present state. Any gain of knowledge means an increase in the capacity to do work on the environment, because of the higher placement of the subject in an “epistemic field”—just as the work done by a weight depends on its elevation in the gravitational field. To describe the advancement of the scientific description of the world, John Smart used the metaphor of climbing “Wigner's ladder”, wherein the rungs get ever closer together and each new rung becomes easier to attain.33 This metaphor pictures well the hyperbolic increase of knowledge acquisition. But even if new knowledge is easier to attain when a great deal of previous knowledge, the Bayesian priors, is already available, the gain of any new knowledge continues to be an energetically costly process. Knowledge is new if it cannot be foreseen, so acquiring new knowledge cannot be, by definition, a deterministic process; it can only result from trials and failures. This is why the majority of actual cognitive transactions, even in humans, continue to be based on the use of previously acquired, embodied knowledge. The functioning of the mammalian immune system is a case in point.
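Spelled out as a minimal derivation of the growth law mentioned above (assuming a constant coefficient c and an initial stock of knowledge K₀; the hyperbolic case adds the assumption that the coefficient itself grows in proportion to K):

```latex
\frac{dK}{dt} = cK
\;\Longrightarrow\;
K(t) = K_0\, e^{ct}
\qquad \text{(exponential growth)}

\frac{dK}{dt} = cK^{2}
\;\Longrightarrow\;
K(t) = \frac{K_0}{1 - cK_0 t}
\qquad \text{(hyperbolic growth, diverging as } t \to 1/(cK_0)\text{)}
```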
The Immune System: A Fundamental Lesson
Besides the nervous system, the immune system is the second cognitive device of every human individual. For survival, the immune system is no less important than the nervous system. This fact may largely escape us for a simple reason: we are not consciously aware of its functioning. The existence of the nervous system has been known for at least two and a half millennia, while the first knowledge about the immune system was gained only at the turn of the 19th and 20th centuries.
The immune system recognizes and destroys antigens, substances that are foreign to the individual organism. Antigens are recognized by antibodies, proteins from the group of immunoglobulins. In a single second, the system recognizes and destroys thousands of “enemies”. Antigens are parts or products of other organisms which have invaded the organism, and according to the standard view, the immune system evolved to defeat the invaders. But it also plays a no less important role in discriminating foreign proteins from the organism's own (and hence in “self-identification”), and also in detecting and eliminating those proteins or cells of the organism which, by a process of somatic mutation, have become estranged and malignant, as in cancer. Just as the nervous system may have evolved from devices which originally served for the recognition and coordination of the organism's own cells and only later enlarged its function to interact with the organism's surroundings, the immune system may have had a similar origin: according to Jerne et al., its initial role was internal recognition and not defense against aliens.34
The immune system is capable of recognizing practically unlimited kinds of chemical compounds, even compounds that are not present in nature and have been synthesized by chemists. When biochemists started to study the mechanisms of immune interactions, they quite naturally visualized an antigen, a foreign molecule of precise structure, as a rigid molecule that somehow enforces its shape upon an immunoglobulin, a freely shapeable molecule—in this way molecular recognition, and at the same time embodiment of knowledge, would take place. According to a model of Linus Pauling, an antibody would be like wax and an antigen like a seal which leaves its imprint in the wax.35 This is not the case. The immune system is ceaselessly generating millions of distinct antibodies, each with its specific chemical shape. When an antigen enters the body, the corresponding antibody is already present, ready to act. We can say that an organism contains a repertoire of all possible chemical images of the external world. As aptly characterized by Piattelli-Palmarini, “nothing is ever ‘new’ to this system; the repertoire of existing antibodies constitutes a ‘network’, an interactive system of ‘internal images’ of all possible external forms, a repertoire which is ‘complete’ and ‘closed’. If an antigen, per absurdum, were indeed new to the system, the system could do nothing at all with it. A really new antigen would be literally invisible to the immune system.”36 The antigen does not induce the formation of the antibody and does not provide any instruction to the immune system; it selects from the rich repertoire of antibodies those that chemically correspond to it. Organisms have simply prepared themselves in advance for all possible alternatives. We have here another type of molecular sentience: many possible interpretations of the world are continually sampled, but only one of them is stabilized upon finding a corresponding “image” in the external world (the process of clonal expansion of a particular antibody and of immunological memory). Knowledge remains embodied, ready for successive use—a sort of “molecular Bayesianism”.
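A toy sketch may make the selectionist principle concrete (this is not a biochemical model; the random repertoire, the antigen string and the numerical “affinity” are illustrative stand-ins): the repertoire exists before any antigen is seen, and the antigen only selects and expands the best pre-existing match.

```python
import random

random.seed(1)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acids, used here only as symbols

def affinity(antibody: str, antigen: str) -> int:
    """Toy 'affinity': number of matching positions (a stand-in for chemical complementarity)."""
    return sum(a == b for a, b in zip(antibody, antigen))

# A diverse repertoire generated blindly, before any antigen has been encountered.
repertoire = ["".join(random.choices(AMINO_ACIDS, k=12)) for _ in range(50_000)]

def respond(antigen: str) -> str:
    """Selection, not instruction: the antigen does not shape a new antibody;
    it merely picks the best pre-existing match, which is then clonally expanded."""
    best = max(repertoire, key=lambda ab: affinity(ab, antigen))
    repertoire.extend([best] * 1_000)  # clonal expansion = embodied memory of the encounter
    return best

print("selected antibody:", respond("ACDKLMNPQRST"))
```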
To need a single type of antibody but to synthesize a hundred million diverse types instead, to invest energy in something that will never be used, may appear an incomprehensible waste. But let us consider another conceivable chemical alternative. The shape of a protein, including the spatial distribution of electric charges on its surface, is determined by the sequence of its amino acids. Because of molecular sentience, the conformation of the protein is not static; the protein continuously samples conformational substates. This does not mean, however, that the conformation of the protein can vary substantially and arbitrarily. To put it figuratively, the protein molecule cannot operate like a human hand, which, thanks to its flexibility, can grasp an object of any possible shape. If an organism did not have the prepared repertoire of antibodies, it would need, after being invaded by a particular antigen, to start blindly synthesizing antibodies one by one until it hit the right one. One molecule of immunoglobulin consists of 1,300 amino acids. It would mean 20^1300 trials, an absolutely impossible number. To scan the whole space of possibilities, a time longer than the age of the universe would be needed, even if each of the hundred billion B-lymphocytes (the typical number in a human individual) were engaged, at the maximum speed of protein synthesis (15 amino acids per second). And the energy required would not correspond to the power of the human body (100 W), but to that of a nuclear plant. These are the reasons why the immune system operates in the only possible way: the Darwinian one. In such a case, its energetic costs are relatively minor.37
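A back-of-the-envelope check of this estimate, using only the figures quoted above plus an assumed age of the universe of roughly 13.8 billion years:

```python
import math

SEQ_LENGTH = 1300              # amino acids per immunoglobulin molecule
AMINO_ACIDS = 20               # possible residues at each position
B_CELLS = 1e11                 # ~hundred billion B-lymphocytes in a human
SYNTHESIS_RATE = 15            # amino acids synthesized per second
AGE_OF_UNIVERSE_S = 13.8e9 * 365.25 * 24 * 3600  # ~4.4e17 s (assumed round figure)

# Work with logarithms, since the numbers themselves are astronomically large.
log10_trials = SEQ_LENGTH * math.log10(AMINO_ACIDS)      # ~1691: the 20^1300 candidates
seconds_per_trial = SEQ_LENGTH / SYNTHESIS_RATE           # ~87 s to synthesize one candidate
log10_time = log10_trials + math.log10(seconds_per_trial) - math.log10(B_CELLS)

print(f"candidate sequences: 10^{log10_trials:.0f}")
print(f"blind-search time:   10^{log10_time:.0f} s  "
      f"(age of the universe: 10^{math.log10(AGE_OF_UNIVERSE_S):.0f} s)")
```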
The immune system as a cognitive device is substantially limited by its single mode of cognition: the chemical one. It recognizes proteins, and low-molecular compounds only if they are bound to proteins as haptens. In evolution, this limitation has rather quickly exhausted the system's evolutionary potential: even though mice and humans had a common ancestor 75 million years ago, the mouse and human immune systems are essentially equally powerful.
The Central Nervous System: The Fraternal Twin of the Immune System
The nervous system differs from the immune system in possessing several cognitive modes. It receives from the environment not only chemical signals, but also tactile, visual and acoustic ones. But these other signals, too, are eventually translated and processed in chemical form. If the Darwinian way of variation and selection is the only energetically admissible principle of operation of the immune system, is this true of the nervous system as well?
Niels Jerne, a pioneer of research on the immune system, was among the first scientists to maintain that this may be the case.38 The idea was later elaborated by Jean-Pierre Changeux in his theory of the selective stabilization of neuronal connections in ontogenesis,39 and by Gerald Edelman in his conception of neural Darwinism.40 Neural development involves two phases: in the first, there is an intrinsic, activity-independent overproduction of cortical structures; at the mental level, they correspond to a diversity of representations, analogous to the diversity of antibodies. In the second phase, those that are poorly matched by inputs from the environment are eliminated (for a review, with a critical appraisal, see ref. 41). It is a process to a certain extent similar to the stabilization of an antibody that finds a corresponding antigen, while other antibodies, which do not encounter their “partners” in the environment, fade away.
In mammals, neurogenesis is essentially terminated while the newborn animal is still in the uterus, so that the environment which selects the arrangements of neurons and their synaptic connections is the internal environment of the bodies of mother and young. And the basic layout is obviously determined by genes. Human neurogenesis seems to be exceptional: the formation of the brain continues long after birth, and signals from the external environment participate in the process.42–44 In two critical periods, shortly after birth and again in puberty, the external environment affects the structuring of the brain by a process that Konrad Lorenz named “imprinting”.45 Lorenz originally discovered imprinting in birds, and birds continue to be the favored animals in its study, mainly as the firm fixation of the newborn on the first object it has taken for its mother. There is little doubt that imprinting should be even more important and more general in humans; and so it is surprising how little we have known of it so far. “Imprinting” may not be the most appropriate name for the process: it suggests a mechanism similar to that of Linus Pauling, mentioned above,35 which proved to be wrong. What does not hold for antibodies may not hold for the brain either: imprinting may not induce brain structures, may not determine synaptic connections, but rather select from those already available: the selected ones are stabilized, the others dismissed. Genes and imprinting essentially fix the human brain at its basic, chemical level; and since the higher levels are just translations of the basic one, at those other levels too: genes and imprinting lay down the skeleton of a personality.
Pursuing this line of reasoning, the Darwinian principles of uncorrelated variation and selection should apply not only to brain development in ontogenesis, but also to the activities of the mature brain. Just as the immune system displays to the chemical environment of antigens a rich array of antibodies, the central nervous system should display to its multimodal environment the most diverse chemical states. They can be ranged into three categories. To the first category belong the states determined by genes embodying the knowledge which accumulated in the evolution of the species. The second category comprises the states which have been set up in the individual brain by imprinting during the two critical periods—they are idiosyncratic, peculiar to every individual, and can change little in his/her lifetime. The third category is represented by the states that the brain generates anew and displays for testing and approval or disproval. Even in the case of the third category, signals from the environment do not induce anything new and appropriate to them, but only select from what is already at their disposal. A signal from the environment is not something that carries “information” to be “processed”; it is a selector, a trigger: it selects from ready-made processes, and the selected one is set off.
One would object straightaway: it cannot be so! Is it not true that the light from a face we look at reflects onto our retina and activates, point by point, photosensitive receptors, and is not this enormous amount of signals transferred to the visual cortex, where they are “computationally” processed and reconstructed into the form of the object observed? However, it need not be so. We know that there are specific areas, and specific neurons, in the human brain for face recognition.46,47 We recognize a face quickly and from just a few hints, because we already carry in our brains a module of the average face, which the data from the environment only trigger and make more precise. In the same way, we can quickly grasp a word because the brain already has its meaning in store.48 This is also the reason why we can carry out complex movements easily and accurately: data from fMRI indicate that the brain contains a model of movements, which is only updated through learning to improve matching.49 Triggering images, processes and events that are already present in the brain, ready-made, may be a solution, or part of a solution, of the old puzzle: how can the brain be fast if neurons are slow?50 How can the brain recognize an object almost immediately if neurons fire electric signals ten million times more slowly than an ordinary computer?51
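The “ten million times” figure is roughly what one obtains from typical round numbers; the firing rate and clock rate below are assumptions for illustration, not figures taken from the text.

```python
# Rough origin of the "ten million times slower" comparison (assumed round numbers).
NEURON_RATE_HZ = 300   # ~upper bound on the sustained firing rate of a cortical neuron
CPU_CLOCK_HZ = 3e9     # ~clock rate of an ordinary computer

print(f"speed ratio: {CPU_CLOCK_HZ / NEURON_RATE_HZ:.0e}")  # ~1e7, i.e., ten million
```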
Some significant categories of objects or behaviors, particularly those vitally important, are built up in the brain as single modules. In unicellular organisms, cognition and behavior consist almost exclusively of such large, coarse modules. In evolution, the size of the modules has been getting smaller and their number much larger, and so it is not easy to notice that the principle of displaying fully prepared states and selecting from them continues to hold, even in humans. A telling example is courting, one of the stages of sexual behavior. In Drosophila, male courting consists of stereotyped motor activities.52,53 They are the same in all males, and the activities follow one another in a precise sequence; if the sequence is interrupted, a male has to begin courting from the very start. Each step of this stereotyped behavior is controlled by a corresponding gene. Learning can modify the intensity of courting, but not its sequence. Sexual courting in humans is much less stereotyped, but in substance it may not differ from that of Drosophila. Only the modules of which human courting consists are more fine-grained and more numerous. This gives human courting its flexibility, a favored subject of artistic accounts. The genes for courting in Drosophila are hierarchically nested and dominated by a single master gene. The same is apparently the case for human courting: a few genes high in the hierarchy, determining the neuroendocrine activities of the hypothalamus, keep fully under control the processes in the brain cortex, from those which give the impression of being “voluntary” but are mainly subject to a posteriori rationalizations, up to the ecstatic outpourings of poets.
The same applies to visual cognition. Just as humans have in the brain an image of a “universal face” as a module, they may carry other similar compact modules for various entities of prime Darwinian relevance. Yet the majority of visual recognition consists in selection from, and recombination of, many finely grained modules. The classical studies of Hubel and Wiesel on cats provided evidence that innate mechanisms endow the visual system with highly specific connections.54 The visual system contains ready-made modules of horizontal and vertical lines, of edges, bars, colors. It is from them that a final image of an object is assembled—seemingly in the form which the object has in the external environment, but in fact constructed in the brain from the available ready-made parts, primordials. According to Fiser et al., “in both the developing and mature visual cortex, sensory evoked activity represents the modulation and triggering of ongoing circuit dynamics by input signals, rather than directly reflecting the structure of the input signal itself.”55 Similarly, from fMRI on blind subjects, Burton et al. inferred that “the brain largely operates intrinsically, with sensory information modulating rather than determining system operation.”56 This is why blind people perceive electric stimulation of specific areas of the brain cortex as visual flashes, and deaf people perceive electric stimulation of other specific areas as acoustic noise. One can paraphrase Plato, with his concept of “knowledge as recollection”, by saying that all our mental activities are just the recall and reuse of the stuff that evolution has implanted into our brain, and into the body as a whole. Even our repertoire of concepts may already be ready-made and, in our lifetime, we may just work to make the preformed concepts more distinct and precise. Our capability for rich and detailed conceptual grasping of phenomena may simply reflect our capability to grasp by hand an object of almost any shape, thanks to the large number of very fine motor modules that give the human hand its exceptional flexibility.
One of the most convincing arguments for the thesis that we are able to cognize in our environment only items for which we are internally set up is the explanation of human empathy by the activation of mirror neurons. We understand what our neighbor is feeling only if the observation of the joy or pain of another person activates in our brain the same neural circuitry that would be activated if we ourselves experienced the joy or the pain. Psychopaths seem to be deficient in this faculty.57
Energetics of the Brain: An Engine that Is Running Continuously at Full Speed
Maintaining brain arrays, set up in evolution or in brain ontogenesis (and “prescribed” by genes and imprinting, respectively), and generating new alternative states resemble maintaining the arrays of the immune system and generating new antibodies. In both cases, they depend on a continual supply of energy. Just as in the case of the immune system it is not possible to “tailor” ad hoc an antibody fitting its corresponding antigen, as it would require “astronomical” amounts of energy, it is energetically impossible for a brain to carry out any cognitive act ab initio.
The total number of brain states, determined by genes, imprinting and steadily running variations, is apparently still much higher than the number of states of the immune system. One would suppose that the energetic demand of the central nervous system would therefore be larger than that of the immune system. This, indeed, is the case. It is generally known that the human brain, with a weight of about 2% of the body, utilizes 20% of the energy dissipated by the body (and even 60% in the first year of life; in adult apes it is only 8%). When calculated per gram of weight, the human brain consumes as much energy as the heart muscle.58 This is roughly sixteen times more than the consumption of skeletal muscle at rest,59 or as much as the consumption of leg muscles in the course of a marathon race.60
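A back-of-the-envelope restatement of the “2% of mass, 20% of energy” figures; the body mass and basal power below are assumed round numbers (the 100 W figure was quoted earlier in the text):

```python
# Assumed round numbers, not measurements from the paper.
BODY_MASS_KG = 70.0
BODY_POWER_W = 100.0   # basal metabolic power of the human body, as quoted earlier

brain_mass_kg = 0.02 * BODY_MASS_KG   # ~1.4 kg
brain_power_w = 0.20 * BODY_POWER_W   # ~20 W

brain_density = brain_power_w / brain_mass_kg  # ~14 W per kg of brain
body_density = BODY_POWER_W / BODY_MASS_KG     # ~1.4 W per kg of body

print(f"brain: {brain_power_w:.0f} W over {brain_mass_kg:.1f} kg -> {brain_density:.0f} W/kg, "
      f"about {brain_density / body_density:.0f}x the whole-body average")
```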
Attwell and Laughlin provided a comprehensive analysis of the energy balance of the brain.60 They limited their analysis to the cortical gray matter, in which 90% of all neurons are excitatory, with glutamate as the neurotransmitter. According to their calculations, 85% of the chemical energy of the cortex is used in neuronal transmission. These data, valid for the neocortex of the rat, have been recalculated with a similar result for the human neocortex.61
The results of Ames, who made the calculations for the whole brain, are similar.58 Only 5–15% of chemical energy is used for “vegetative metabolism”, the chemical processes which brain cells have in common with other cells. 40–50% is used for gated membrane transport of Na+ ions, 3–7% for transport of Ca2+, 10–20% for the processing of neurotransmitters (uptake and synthesis of the transmitter; its concentration within vesicles; translocation, docking, exocytosis and subsequent endocytosis of the vesicles; reuptake and chemical conversion of the transmitter following its release into the synaptic cleft), 20–30% for intracellular signaling systems (the activation and deactivation of proteins, e.g., by phosphorylation, and the formation and removal of substrates that act as second messengers, such as cAMP, cGMP, inositol compounds), and 20–30% for axonal and dendritic transport in both directions and for reshaping the cytoskeleton. The portion of energy required for reshaping the cytoskeleton is remarkable in view of the work of Bernstein et al. with cultured chick ciliary neurons: from indirect measurements they inferred that as much as 50% of the total ATP consumed in this system was used in actin treadmilling.62 The calculations of Attwell and Laughlin and of Ames concur with the measurement by magnetic resonance spectroscopy, which indicated that 80% of the activity of the human brain is due to the cycling of glutamate, the major excitatory transmitter of the brain cortex.63 This neurotransmitter cycling flux was also high in a brain which was receiving no external stimuli; performing cognitive tasks and sensory stimulation increased neurotransmitter cycling by only 10–20%.64
The fact that cortical neurons exhibit intrinsic activity even without receiving input from the environment has long been known. It had previously mostly been accounted for as an expression of noise, in analogy with membrane channels which, when closed, undergo molecular noise. However, it has turned out that the activities are correlated between neurons within large areas of the cortex.65 This has been interpreted by assuming that, in the absence of external stimuli, cortical neurons are “wandering” across diverse brain states. At the mental level, this manifests itself as a “wandering” of the mind, the purpose of which may be to maintain an optimal level of arousal, to lend a sense of coherence to one's past, present and future experiences, or simply to divide attention and to manage concurrent mental tasks.66 One would expect that a mental load, for instance solving a complex mathematical problem, would correspond to an equally large chemical load, and that this should entail a rise in metabolism and in the consumption of chemical energy by the brain. Accordingly, the fact that measurements of the energetic balance of the whole brain showed no differences between the “resting” and the “loaded” brain had long been a puzzle. The puzzle was solved when new imaging techniques made it possible to measure local fluxes of energy in specific areas of the brain. In an area of the brain which is involved in some specific mental activity, not only solving mathematical problems, but also, for instance, scrutinizing photographic images, playing a musical instrument, etc., blood flow or the consumption of glucose or oxygen goes up, indicating an enhancement of biochemical activity and hence of the consumption of chemical energy. When one area is activated, another area is relatively less active.67
In analogy with the “dark energy” of the universe, of which we know little so far, Marcus Raichle called the energy which does not serve the brain in “processing” inputs from the environment but is used for intensive intrabrain (and intramental) activity the “brain's dark energy”.68 Raichle considered several possible purposes of this intrinsic activity, which he designated as a “default” mode.69 One possibility is that it represents unconstrained, stimulus-independent thought. Another possibility is that the intrinsic activity facilitates responses to stimuli by maintaining a balance between excitation and inhibition and in this way increasing the responsiveness (or gain) of neurons. The intrinsic activity may also instantiate the maintenance of knowledge for interpreting, responding to, and even predicting environmental demands. Others have considered the endogenous activity as serving the processing and stabilization of memory.70 Terrence Sejnowski has stressed that the explication of the high endogenous activity of the brain can be a way toward understanding the nature of consciousness and of first-person experience.71 It should be remarked, however, that the high intrinsic activity in those areas of the brain which are responsible for the comprehension of the mental states of other people, for moral reasoning, for self-referential behavior and for imagining the future was also observed in the brains of monkeys in deep anesthesia, and hence under conditions in which the animals could not have been conscious of their states.72 Apparently, it is possible to dissociate the mental level from the chemical level, and the intrinsic activity continues to run unchanged at the latter.
It is well known that during sleep the brain is almost as active as in the wakeful state. A large portion of sleep is filled with dreams. There have always been the most varied speculations about the function and meaning of night dreaming. The prevailing view is that night dreams serve to consolidate the memory of the events of the previous day. According to Allan Hobson, the night dream functions to some extent similarly to the immune system: the most varied fictive situations are created by the brain, of which some may later prove useful in real life.73 It is conceivable that, more than just consolidating and ordering the past, the night dream is a preparation for future options: a largely fortuitous, but at the same time coordinated, setting up of synaptic connections and of cocktails of emotionally relevant chemicals in the brain. The same may apply to daydreaming. Eric Klinger maintains that the human mind devotes half of its wakeful state to daydreaming.74 Daydreaming comprises not only fanciful stories as in night dreams—in daydreaming mainly as sources of imaginary emotional satisfactions—but also dissections of past activities and planning of future ones, and fleeting analyses of all possible and impossible alternatives of action—always in the form of coherent stories. However, at the basic material level, they are nothing else but chemical processes of the same nature as the processes in the immune system. The only difference, but a substantial one of course, is that they can be perceived and conceptualized at the mental level.
This endogenous mental activity, translated from “mentalese” to “biochemese”, has the character of coherent “stories” at the biochemical level, too. In ordinary language, one may be inclined to call the endogenous activity “spontaneous”; however, it is not spontaneous thermodynamically, but is the main consumer of chemical energy. Without the intense endogenous activity, the structures of the complicated edifice of the brain would steadily lose correlations and order would fast diminish; the very high thermodynamic height would lose its altitude. At the same time, the intensive endogenous operation keeps the brain ready to respond promptly to external challenges. The brain resembles a car with the engine always switched on, whether the car is running or standing still. The blood, circulating through the brain at a rate of more than half a liter per minute, copiously supplies the brain with fuel and oil, but it also functions as a cooling fluid: the high power of the brain needs efficient cooling.75
Inadequacy of the Computer Metaphor
Maintaining and generating brain states consists of many chemical processes running in an intricately structured system. If two chemical reactions are identical but proceed in different structural contexts, they represent two distinct chemical events. Accordingly, a vast number of chemical events takes place in the brain at any single moment. Most of them are endergonic, consuming chemical energy. Researchers who consider brains to be computers speak of elementary computational operations rather than of chemical events. They have calculated that as many as 10^15 computations are executed in the brain every second. Some computer scientists have estimated that in less than 12 years cheap personal computers would be available on the market possessing a computing capacity equal to that of the human brain.
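Taking the two figures at face value (the 10^15 operations per second claimed by brain-as-computer proponents, and the roughly 20 W implied by the 20% share of a ~100 W body quoted earlier), each such “operation” would have a chemical energy budget of only a few tens of femtojoules; a minimal sketch of this arithmetic, with both inputs to be read as assumptions:

```python
BRAIN_POWER_W = 20.0    # assumed: 20% of the ~100 W basal power quoted earlier
OPS_PER_SECOND = 1e15   # the figure cited by brain-as-computer proponents

energy_per_op_j = BRAIN_POWER_W / OPS_PER_SECOND
print(f"energy budget per 'computation': {energy_per_op_j:.0e} J (~{energy_per_op_j * 1e15:.0f} fJ)")
```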
This reasoning is completely misplaced. If the brain is a machine, it is a chemical machine, serving a single purpose. Its only goal is to ensure the sustenance and reproduction of its bearer, the individual organism. The chemical processes running in the brain are its semantics. In contrast, computers are universal, syntactic machines (in the sense of Turing), destined to accomplish arbitrary computations, using programs and data inputted by a human subject. They are human exosomatic organs.12 A computer processes inputs and, after ending the program, stops—what a diametrical contrast to the brain, which incessantly, day and night, is running “at full speed”! Robots which will be equipped with computers so powerful that they will have the ability to self-replicate, and will thus be alive (if one takes self-replication as a distinct marker of life), will have nothing in common with natural life.
This conclusion can be supported by experience from two periods of the development of artificial intelligence (AI), which has been supposed to emulate brain functioning. Rodney Brooks provided an impressive picture from his personal career.76 The first period began more than 50 years ago; nowadays it is considered out-of-date and is commonly called, following John Haugeland's proposal of 1985, GOFAI (“good old-fashioned AI”). It was based on the ideas of the representational theory of mind. According to it, a living being is controlled by the mind as a sort of computer, carrying out computations over mental representations of the external world. A robot should be like a copy of such a living being. In GOFAI, the programmer endows the robot with a full description of its environment and with a list of explicit rules and instructions, which are used in computation. Later, Brooks took an alternative, “situated” approach. It represents the second period of AI; we may label it NAI (“new AI”). The robot is equipped with ways of reacting appropriately to its environment without a list of all its actions supplied by the programmer. Memory of the actions that proved advantageous is stored, so that the robot learns from experience; for this, it does not need a programmer. According to Brooks, cognitive systems are not passive computers; they are agents. They have no central control unit; their intelligence is built up gradually in a number of parallel processes, which are only loosely coupled and are coordinated by interactions with the environment. Other agents are part of the environment. Recently, David Gelernter, a professor of computer science, came out with an impressive criticism of the views common in his circles.77 In his opinion, the brain is not a computer. Computers do not know or care what instructions they are executing, just as an oven does not care what it is baking; they deal with outward forms, not meanings. This is not the human case: with our brains, or, more appropriately, with our whole bodies, we experience the world and feel emotions. Chemistry makes us different from computers.
The process of protein folding may be the best illustration of the incommensurability of the chemical processes on which natural life is based and digital computation. A simple protein folds into its native structure within milliseconds or at most seconds. In the process, the free energy of interactions between the amino acids, whose sequence constitutes the primary structure of the denatured protein, is minimized. Powerful digital computers can simulate by computation only a very short part of the natural process of folding, hundreds of nanoseconds, six orders of magnitude less than nature performs. To multiply the computing capacity, international consortia are being formed with the aim of harnessing the untapped computing power of millions of personal computers around the world, connected by the Internet (reviewed in ref. 78). When a computer is turned on but not in use, its idle time is exploited to perform computations on data delivered by a coordinator of the folding simulation. Not only an enormous amount of time, but also massive energy dissipation, is needed to simulate something that in nature proceeds fast and spontaneously. A protein is simply not a computer, although it has often been compared to, or even identified with, one.
To equate the brain with a computer would mean assigning to the brain at least the same enormous computational power as is possessed by all the Internet-connected computers simulating the folding of a protein. Rather than being a computer, the brain is, as is a protein, a dynamical system.79,80 Just as the process of folding has an attractor—the minimum of free energy—there may be numerous attractors in the brain, some set down nomically, by natural laws, some teleonomically, by genes, imprinting and perhaps other as yet undiscovered determinants. The way, described above, by which the human brain recognizes an individual human face through a built-in prototype of the universal face may be a paradigmatic example: the prototype face would function as an attractor.
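A toy dynamical-system sketch of recognition as relaxation toward an attractor (not a neural model; the “prototype” vector and the relaxation rule are illustrative assumptions): a noisy, partial input flows to the stored prototype rather than being reconstructed point by point.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stored "prototype" acting as a point attractor (purely illustrative numbers).
prototype = np.array([1.0, -0.5, 0.3, 0.8])

def relax(state: np.ndarray, steps: int = 50, rate: float = 0.2) -> np.ndarray:
    """Gradient descent on V(x) = 0.5 * ||x - prototype||^2: whatever the starting
    state (a degraded 'hint'), the dynamics carry it to the prototype."""
    for _ in range(steps):
        state = state - rate * (state - prototype)
    return state

noisy_hint = prototype + rng.normal(scale=0.7, size=prototype.shape)  # degraded input
print("hint:    ", np.round(noisy_hint, 2))
print("recalled:", np.round(relax(noisy_hint), 2))  # converges to the prototype
```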
The arguments presented in this study indicate that neither GOFAI nor NAI provides an adequate model of the working of the human mind, and hence of the brain. One cannot say that GOFAI and NAI represent two extreme, opposed, ideal types and that the human brain can be placed somewhere in between. We simply lack any plausible model. Rodney Brooks may have grasped it appropriately when, among the explanations of why we have been unsuccessful so far, he listed the following one: “we might be missing something fundamental and currently unimagined in our models of biology”.81 This is not an appeal to resignation but to intensify our search. We can specify the impossible—this has been the primary aim of the present paper. In the space of the possible, we have to try the most numerous and the most varied alternatives; to act in the same way as the two natural cognitive systems, the immune and the nervous, operate. The more alternatives, the higher the probability that one of them will turn out to be right and become an extension of our knowledge.
Acknowledgements
This study was supported in part by grant 55005622 to Jozef Nosek from the Howard Hughes Medical Institute.
Footnotes
Previously published online as a Communicative & Integrative Biology E-publication: http://www.landesbioscience.com/journals/cib/article/6670
References
- 1.Keller EF. The century of the gene. Cambridge, MA: Harvard University Press; 2000. [Google Scholar]
- 2.Kennedy D. Triple play. Science. 2000;290:709. doi: 10.1126/science.290.5492.709. [DOI] [PubMed] [Google Scholar]
- 3.Wilson EO. Consilience: The Unity of Knowledge. New York, NY: Random House; 1998. [Google Scholar]
- 4.Daly HE, Townsend KN. Valuing the Earth: Economy, Ecology, Ethics. Cambridge, MA: MIT Press; 1993. [Google Scholar]
- 5.Luisi PL. Emergence in chemistry: Chemistry as the embodiment of emergence. Foundations Chem. 2002;4:183–200. [Google Scholar]
- 6.Mill JS. A System of Logic. New York, NY: Harper and Brothers; 1874. [Google Scholar]
- 7.Kovác L. Life, chemistry and cognition. EMBO Rep. 2006;7:562–566. doi: 10.1038/sj.embor.7400717. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Rose SPR. Human agency in the neurocentric age. EMBO Rep. 2005;6:1001–1005. doi: 10.1038/sj.embor.7400566. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Campbell DT. ‘Downward causation’ in hierarchically organized biological systems. In: Ayala FJ, Dobzhansky T, editors. Studies in the Philosophy of Biology. London: Macmillan; 1974. pp. 176–186. [Google Scholar]
- 10.Ellis GRF. Physics, complexity and causality. Nature. 2005;435:743. doi: 10.1038/435743a. [DOI] [PubMed] [Google Scholar]
- 11.Davies P. The physics of downward causation. In: Clayton P, Davies P, editors. The Re-Emergence of Emergence: The Emergentist Hypothesis from Science to Religion. Oxford: Oxford University Press; 2006. pp. 35–51. [Google Scholar]
- 12.Lotka AJ. Elements of Physical Biology. Baltimore: Williams and Wilkins; 1925. [Google Scholar]
- 13.Miller GA. The cognitive revolution: a historical perspective. Trends Cogn Sci. 2003;7:141–144. doi: 10.1016/s1364-6613(03)00029-9. [DOI] [PubMed] [Google Scholar]
- 14.Von Neumann J. The Computer and the Brain. New Haven, CT: Yale University Press; 1958. [Google Scholar]
- 15.Minsky M. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. New York, NY: Simon and Schuster; 2006. [Google Scholar]
- 16.Lwoff A. Biological order. Cambridge, MA: MIT Press; 1962. [Google Scholar]
- 17.Lakoff G, Johnson M. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York, NY: Basic Books; 1999. [Google Scholar]
- 18.Clark A. Being There: Putting Brain, Body and World Together Again. Cambridge, MA: MIT Press; 1997. [Google Scholar]
- 19.Anderson ML. Embodied cognition: A field guide. Artif Intell. 2003;149:91–130. [Google Scholar]
- 20.Schrödinger E. What is Life? Cambridge, UK: Cambridge University Press; 1944. [Google Scholar]
- 21.Monod J. Le hasard et la nécessité. Paris: Édition du Seuil; 1970. [Google Scholar]
22. Prigogine I. Time, structure and fluctuations (Nobel lecture in chemistry). Science. 1978;201:777–785. doi: 10.1126/science.201.4358.777.
23. Odum HT, Pinkerton RC. Time's speed regulator: The optimum efficiency for maximum output in physical and biological systems. Am Sci. 1955;43:331–343.
24. Schneider E, Kay J. Life as a manifestation of the second law of thermodynamics. Math Comp Model. 1994;19:25–48.
25. Kauffman S. Investigations. Oxford: Oxford University Press; 2000.
26. Swenson R. Spontaneous order, autocatakinetic closure, and the development of space-time. Ann NY Acad Sci. 2000;901:311–319. doi: 10.1111/j.1749-6632.2000.tb06290.x.
27. Chaisson E. Cosmic Evolution: The Rise of Complexity in Nature. Cambridge, MA: Harvard University Press; 2001.
28. Lotka A. Contribution to the energetics of evolution. Proc Natl Acad Sci USA. 1922;8:147–151. doi: 10.1073/pnas.8.6.147.
29. Kovác L. Information and knowledge in biology. Time for reappraisal. Plant Signal Behav. 2007;2:65–73. doi: 10.4161/psb.2.2.4113.
30. Jaynes ET. Probability Theory: The Logic of Science. Cambridge, UK: Cambridge University Press; 2003.
31. Kuhn H. Origin of life and physics: Diversified macrostructure—Inducement to form information-carrying and knowledge-accumulating systems. J Res Develop. 1988;32:37–46.
32. Lloyd S, Pagels H. Complexity as thermodynamic depth. Ann Phys. 1988;188:186–213.
33. Smart J. The cosmic watermark hypothesis: Wigner's ladder. 2008. http://www.accelerationwatch.com/watermark.html
34. Jerne NK, Roland E, Cazenave PA. Recurrent idiotypes and internal images. EMBO J. 1982;1:243–247. doi: 10.1002/j.1460-2075.1982.tb01154.x.
35. Pauling L. A theory of the structure and process of formation of antibodies. J Amer Chem Soc. 1940;62:2643–2657.
36. Piattelli-Palmarini M. Evolution, selection and cognition: From “learning” to parameter setting in biology and in the study of language. Cognition. 1989;31:1–44. doi: 10.1016/0010-0277(89)90016-4.
37. Derting TR, Compton S. Immune response, not immune maintenance, is energetically costly in wild white-footed mice (Peromyscus leucopus). Physiol Biochem Zool. 2003;76:744–752. doi: 10.1086/375662.
38. Jerne NK. Antibodies and learning: Selection versus instruction. In: Quarton GC, Melnechuk T, Schmitt FO, editors. The Neurosciences: A Study Program. New York, NY: Rockefeller University Press; 1968.
39. Changeux JP. L'homme neuronal [Neuronal Man]. Paris: Fayard; 1983.
40. Edelman GM. Neural Darwinism: The Theory of Neural Group Selection. New York, NY: Basic Books; 1987.
41. Quartz SR. The constructivist brain. Trends Cogn Sci. 1999;3:48–57. doi: 10.1016/s1364-6613(98)01270-4.
42. Trevathan WR. Human Birth. New York, NY: Aldine de Gruyter; 1987.
43. Calvin WH. The Ascent of Mind. New York, NY: Bantam Books; 1991.
44. Roth G, Dicke U. Evolution of the brain and intelligence. Trends Cogn Sci. 2005;9:250–257. doi: 10.1016/j.tics.2005.03.005.
45. Lorenz K. On Aggression. London: Methuen; 1966.
46. Leopold DA, Bondar IV, Giese MA. Norm-based face encoding by single neurons in the monkey temporal cortex. Nature. 2006;442:572–575. doi: 10.1038/nature04951.
47. Bruce V, Young A. In the Eye of the Beholder: The Science of Face Perception. Oxford: Oxford University Press; 1998.
48. Ito M. Internal model visualized. Nature. 2000;403:153–154. doi: 10.1038/35003097.
49. Imamizu H, Miyauchi S, Tamada T, Sasaki Y, Takino R, Pütz B, Yoshioka T, Kawato M. Human cerebral activity reflecting an acquired internal model of a new tool. Nature. 2000;403:192–195. doi: 10.1038/35003194.
50. Benuskova L, Kasabov N. Computational Neurogenetic Modeling. New York, NY: Springer; 2007.
51. Thorpe SJ, Fabre-Thorpe M. Seeking categories in the brain. Science. 2001;291:260–263. doi: 10.1126/science.1058249.
52. Baker BS, Taylor BJ, Hall JC. Are complex behaviors specified by dedicated regulatory genes? Reasoning from Drosophila. Cell. 2001;105:13–24. doi: 10.1016/s0092-8674(01)00293-8.
53. Mehren JE, Ejima A, Griffith LC. Unconventional sex: Fresh approaches to courtship learning. Curr Opin Neurobiol. 2004;14:745–750. doi: 10.1016/j.conb.2004.10.012.
54. Wiesel TN. Postnatal development of the visual cortex and the influence of environment. Nature. 1982;299:583–591. doi: 10.1038/299583a0.
55. Fiser J, Chiu C, Weliky M. Small modulation of ongoing cortical dynamics by sensory input during natural vision. Nature. 2004;431:573–578. doi: 10.1038/nature02907.
56. Burton H, Snyder AZ, Raichle ME. Default brain functionality in blind people. Proc Natl Acad Sci USA. 2004;101:15500–15505. doi: 10.1073/pnas.0406676101.
57. Abbott A. Scanning psychopaths. Nature. 2007;450:942–944. doi: 10.1038/450942a.
58. Ames A. CNS energy metabolism as related to function. Brain Res Rev. 2000;34:42–68. doi: 10.1016/s0165-0173(00)00038-2.
59. Leonard WR, Robertson ML, Snodgrass JJ, Kuzawa CW. Metabolic correlates of hominid brain evolution. Compar Biochem Physiol A. 2003;136:5–15. doi: 10.1016/s1095-6433(03)00132-6.
60. Attwell D, Laughlin SB. An energy budget for signaling in the grey matter of the brain. J Cereb Blood Flow Metab. 2001;21:1133–1145. doi: 10.1097/00004647-200110000-00001.
61. Lennie P. The cost of cortical computation. Curr Biol. 2003;13:493–497. doi: 10.1016/s0960-9822(03)00135-0.
62. Bernstein BW, Bamburg JR. Actin-ATP hydrolysis is a major energy drain for neurons. J Neurosci. 2003;23:1–6. doi: 10.1523/JNEUROSCI.23-01-00002.2003.
63. Shulman RG, Rothman DL, Behar KL, Hyder F. Energetic basis of brain activity: implications for neuroimaging. Trends Neurosci. 2004;27:489–495. doi: 10.1016/j.tins.2004.06.005.
64. Shulman RG, Rothman DL. Interpreting functional imaging studies in terms of neurotransmitter cycling. Proc Natl Acad Sci USA. 1998;95:11993–11998. doi: 10.1073/pnas.95.20.11993.
65. Tsodyks M, Kenet T, Grinvald A, Arieli A. Linking spontaneous activity of single cortical neurons and the underlying functional architecture. Science. 1999;286:1943–1946. doi: 10.1126/science.286.5446.1943.
66. Mason MF, Norton MI, Van Horn JD, Wegner DM, Grafton ST, Macrae CN. Wandering minds: The default network and stimulus-independent thought. Science. 2007;315:393–395. doi: 10.1126/science.1131295.
67. Greicius MD, Krasnow B, Reiss AL, Menon V. Functional connectivity in the resting brain: A network analysis of the default mode hypothesis. Proc Natl Acad Sci USA. 2003;100:253–258. doi: 10.1073/pnas.0135058100.
68. Raichle ME. The brain's dark energy. Science. 2006;314:1249–1250.
69. Raichle ME, Gusnard DA. Intrinsic brain activity sets the stage for expression of motivated behavior. J Compar Neurol. 2005;493:167–176. doi: 10.1002/cne.20752.
70. Miall RC, Robertson EM. Functional imaging: Is the resting brain resting? Curr Biol. 2006;16:998–1000. doi: 10.1016/j.cub.2006.10.041.
71. Sejnowski T. The computational self. Ann NY Acad Sci. 2003;1001:262–271. doi: 10.1196/annals.1279.015.
72. Vincent JL, Patel GH, Fox MD, Snyder AZ, Baker JT, Van Essen DC, Zempler JM, Snyder LH, Corbetta M, Raichle ME. Intrinsic functional architecture in the anaesthetized monkey brain. Nature. 2007;447:83–86. doi: 10.1038/nature05758.
73. Hobson A. The Chemistry of Conscious States. New York, NY: Little, Brown; 1994.
74. Klinger E. Daydreaming. Los Angeles: Tarcher; 1990.
75. Falk D. Braindance. New York, NY: Holt; 1992.
76. Brooks RA. Flesh and Machines: How Robots Will Change Us. New York, NY: Pantheon; 2002.
77. Gelernter D. Artificial intelligence is lost in the woods. MIT Technol Rev. 2007.
78. Duan Y, Kollman PA. Deep computing for the life sciences. IBM Systems J. 2001;40:297–309.
79. Carello C, Turvey MT, Kugler PN, Shaw RE. Inadequacies of the computer metaphor. In: Gazzaniga MS, editor. Handbook of Cognitive Neuroscience. New York, NY: Plenum Press; 1984. pp. 229–248.
80. Van Gelder TJ. The dynamical hypothesis in cognitive science. Behav Brain Sci. 1998;21:1–14. doi: 10.1017/s0140525x98001733.
81. Brooks RA. The relationship between matter and life. Nature. 2001;409:409–411. doi: 10.1038/35053196.